00:00:00.009 Started by upstream project "autotest-per-patch" build number 127120 00:00:00.009 originally caused by: 00:00:00.009 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.100 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.170 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.182 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.192 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.193 > git config core.sparsecheckout # timeout=10 00:00:04.203 > git read-tree -mu HEAD # timeout=10 00:00:04.218 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.248 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.249 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.337 [Pipeline] Start of Pipeline 00:00:04.349 [Pipeline] library 00:00:04.350 Loading library shm_lib@master 00:00:04.350 Library shm_lib@master is cached. Copying from home. 00:00:04.362 [Pipeline] node 00:00:04.373 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.374 [Pipeline] { 00:00:04.381 [Pipeline] catchError 00:00:04.382 [Pipeline] { 00:00:04.390 [Pipeline] wrap 00:00:04.396 [Pipeline] { 00:00:04.402 [Pipeline] stage 00:00:04.403 [Pipeline] { (Prologue) 00:00:04.414 [Pipeline] echo 00:00:04.415 Node: VM-host-SM16 00:00:04.419 [Pipeline] cleanWs 00:00:04.425 [WS-CLEANUP] Deleting project workspace... 00:00:04.425 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.429 [WS-CLEANUP] done 00:00:04.586 [Pipeline] setCustomBuildProperty 00:00:04.650 [Pipeline] httpRequest 00:00:04.674 [Pipeline] echo 00:00:04.676 Sorcerer 10.211.164.101 is alive 00:00:04.682 [Pipeline] httpRequest 00:00:04.686 HttpMethod: GET 00:00:04.686 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.688 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.698 Response Code: HTTP/1.1 200 OK 00:00:04.698 Success: Status code 200 is in the accepted range: 200,404 00:00:04.699 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.990 [Pipeline] sh 00:00:09.268 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.282 [Pipeline] httpRequest 00:00:09.297 [Pipeline] echo 00:00:09.298 Sorcerer 10.211.164.101 is alive 00:00:09.305 [Pipeline] httpRequest 00:00:09.309 HttpMethod: GET 00:00:09.309 URL: http://10.211.164.101/packages/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz 00:00:09.310 Sending request to url: http://10.211.164.101/packages/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz 00:00:09.331 Response Code: HTTP/1.1 200 OK 00:00:09.331 Success: Status code 200 is in the accepted range: 200,404 00:00:09.332 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz 00:03:29.435 [Pipeline] sh 00:03:29.715 + tar --no-same-owner -xf spdk_3c25cfe1d27e578d46d5823ea704025d22b41d86.tar.gz 00:03:33.008 [Pipeline] sh 00:03:33.287 + git -C spdk log --oneline -n5 00:03:33.288 3c25cfe1d raid: Generic changes to support DIF/DIX for RAID 00:03:33.288 0e983c564 nvmf/tcp: use sock group polling for the listening sockets 00:03:33.288 cff943742 nvmf/tcp: add transport field to the spdk_nvmf_tcp_port struct 00:03:33.288 13fe888c9 nvmf: add helper function to get a transport poll group 00:03:33.288 02f272e46 test/dma: Fix ibv_reg_mr usage 00:03:33.306 [Pipeline] writeFile 00:03:33.321 [Pipeline] sh 00:03:33.598 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:33.611 [Pipeline] sh 00:03:33.890 + cat autorun-spdk.conf 00:03:33.890 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.890 SPDK_TEST_NVMF=1 00:03:33.890 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:33.890 SPDK_TEST_URING=1 00:03:33.890 SPDK_TEST_USDT=1 00:03:33.890 SPDK_RUN_UBSAN=1 00:03:33.890 NET_TYPE=virt 00:03:33.890 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:33.896 RUN_NIGHTLY=0 00:03:33.898 [Pipeline] } 00:03:33.913 [Pipeline] // stage 00:03:33.927 [Pipeline] stage 00:03:33.928 [Pipeline] { (Run VM) 00:03:33.940 [Pipeline] sh 00:03:34.214 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:34.214 + echo 'Start stage prepare_nvme.sh' 00:03:34.214 Start stage prepare_nvme.sh 00:03:34.214 + [[ -n 3 ]] 00:03:34.214 + disk_prefix=ex3 00:03:34.214 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:03:34.214 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:03:34.214 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:03:34.214 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:34.214 ++ SPDK_TEST_NVMF=1 00:03:34.214 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:34.214 ++ SPDK_TEST_URING=1 00:03:34.214 ++ SPDK_TEST_USDT=1 00:03:34.214 ++ SPDK_RUN_UBSAN=1 00:03:34.214 ++ NET_TYPE=virt 00:03:34.214 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:34.214 ++ RUN_NIGHTLY=0 
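For context: autorun-spdk.conf, written and then sourced above, is a flat KEY=value shell fragment, so each key simply becomes a shell variable in the test scripts. A minimal sketch of that consumption pattern, assuming bash and only the variable names visible in the log (the guard below is illustrative, not the exact spdk/autorun.sh logic):

    #!/usr/bin/env bash
    # Sketch: consume a KEY=value config the same way the log shows it being sourced.
    set -euo pipefail
    conf=${1:-autorun-spdk.conf}
    # Sourcing turns every assignment in the file into a shell variable in this process.
    source "$conf"
    if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 && "${SPDK_TEST_NVMF_TRANSPORT:-}" == "tcp" ]]; then
        echo "NVMe-oF/TCP functional tests enabled (URING=${SPDK_TEST_URING:-0}, UBSAN=${SPDK_RUN_UBSAN:-0})"
    fi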
00:03:34.214 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:34.214 + nvme_files=() 00:03:34.214 + declare -A nvme_files 00:03:34.214 + backend_dir=/var/lib/libvirt/images/backends 00:03:34.214 + nvme_files['nvme.img']=5G 00:03:34.214 + nvme_files['nvme-cmb.img']=5G 00:03:34.214 + nvme_files['nvme-multi0.img']=4G 00:03:34.215 + nvme_files['nvme-multi1.img']=4G 00:03:34.215 + nvme_files['nvme-multi2.img']=4G 00:03:34.215 + nvme_files['nvme-openstack.img']=8G 00:03:34.215 + nvme_files['nvme-zns.img']=5G 00:03:34.215 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:34.215 + (( SPDK_TEST_FTL == 1 )) 00:03:34.215 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:34.215 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:34.215 + for nvme in "${!nvme_files[@]}" 00:03:34.215 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:03:34.215 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:34.215 + for nvme in "${!nvme_files[@]}" 00:03:34.215 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:03:35.167 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:35.167 + for nvme in "${!nvme_files[@]}" 00:03:35.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:03:35.167 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:35.167 + for nvme in "${!nvme_files[@]}" 00:03:35.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:03:35.167 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:35.167 + for nvme in "${!nvme_files[@]}" 00:03:35.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:03:35.167 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:35.167 + for nvme in "${!nvme_files[@]}" 00:03:35.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:03:35.167 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:35.167 + for nvme in "${!nvme_files[@]}" 00:03:35.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:03:36.101 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:36.101 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:03:36.101 + echo 'End stage prepare_nvme.sh' 00:03:36.101 End stage prepare_nvme.sh 00:03:36.112 [Pipeline] sh 00:03:36.389 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:36.389 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:03:36.389 00:03:36.389 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:03:36.389 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:03:36.389 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:36.389 HELP=0 00:03:36.389 DRY_RUN=0 00:03:36.389 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:03:36.389 NVME_DISKS_TYPE=nvme,nvme, 00:03:36.389 NVME_AUTO_CREATE=0 00:03:36.389 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:03:36.389 NVME_CMB=,, 00:03:36.389 NVME_PMR=,, 00:03:36.389 NVME_ZNS=,, 00:03:36.389 NVME_MS=,, 00:03:36.389 NVME_FDP=,, 00:03:36.389 SPDK_VAGRANT_DISTRO=fedora38 00:03:36.389 SPDK_VAGRANT_VMCPU=10 00:03:36.389 SPDK_VAGRANT_VMRAM=12288 00:03:36.389 SPDK_VAGRANT_PROVIDER=libvirt 00:03:36.389 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:36.389 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:36.389 SPDK_OPENSTACK_NETWORK=0 00:03:36.389 VAGRANT_PACKAGE_BOX=0 00:03:36.389 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:03:36.389 FORCE_DISTRO=true 00:03:36.389 VAGRANT_BOX_VERSION= 00:03:36.389 EXTRA_VAGRANTFILES= 00:03:36.389 NIC_MODEL=e1000 00:03:36.389 00:03:36.389 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:03:36.389 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:03:39.669 Bringing machine 'default' up with 'libvirt' provider... 00:03:40.234 ==> default: Creating image (snapshot of base box volume). 00:03:40.234 ==> default: Creating domain with the following settings... 00:03:40.234 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721862242_51ea64dced21e7c64ac5 00:03:40.234 ==> default: -- Domain type: kvm 00:03:40.234 ==> default: -- Cpus: 10 00:03:40.234 ==> default: -- Feature: acpi 00:03:40.234 ==> default: -- Feature: apic 00:03:40.234 ==> default: -- Feature: pae 00:03:40.234 ==> default: -- Memory: 12288M 00:03:40.234 ==> default: -- Memory Backing: hugepages: 00:03:40.234 ==> default: -- Management MAC: 00:03:40.234 ==> default: -- Loader: 00:03:40.234 ==> default: -- Nvram: 00:03:40.234 ==> default: -- Base box: spdk/fedora38 00:03:40.234 ==> default: -- Storage pool: default 00:03:40.234 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721862242_51ea64dced21e7c64ac5.img (20G) 00:03:40.234 ==> default: -- Volume Cache: default 00:03:40.234 ==> default: -- Kernel: 00:03:40.234 ==> default: -- Initrd: 00:03:40.234 ==> default: -- Graphics Type: vnc 00:03:40.234 ==> default: -- Graphics Port: -1 00:03:40.234 ==> default: -- Graphics IP: 127.0.0.1 00:03:40.234 ==> default: -- Graphics Password: Not defined 00:03:40.234 ==> default: -- Video Type: cirrus 00:03:40.234 ==> default: -- Video VRAM: 9216 00:03:40.234 ==> default: -- Sound Type: 00:03:40.234 ==> default: -- Keymap: en-us 00:03:40.234 ==> default: -- TPM Path: 00:03:40.234 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:40.234 ==> default: -- Command line args: 00:03:40.234 ==> default: -> value=-device, 00:03:40.234 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:40.234 ==> default: -> value=-drive, 00:03:40.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:03:40.234 ==> default: -> value=-device, 
00:03:40.234 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.234 ==> default: -> value=-device, 00:03:40.234 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:40.234 ==> default: -> value=-drive, 00:03:40.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:40.234 ==> default: -> value=-device, 00:03:40.234 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.234 ==> default: -> value=-drive, 00:03:40.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:40.234 ==> default: -> value=-device, 00:03:40.234 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.234 ==> default: -> value=-drive, 00:03:40.234 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:40.234 ==> default: -> value=-device, 00:03:40.234 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:40.492 ==> default: Creating shared folders metadata... 00:03:40.492 ==> default: Starting domain. 00:03:41.875 ==> default: Waiting for domain to get an IP address... 00:03:59.953 ==> default: Waiting for SSH to become available... 00:03:59.953 ==> default: Configuring and enabling network interfaces... 00:04:04.136 default: SSH address: 192.168.121.196:22 00:04:04.136 default: SSH username: vagrant 00:04:04.136 default: SSH auth method: private key 00:04:05.509 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:13.617 ==> default: Mounting SSHFS shared folder... 00:04:14.990 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:04:14.990 ==> default: Checking Mount.. 00:04:15.923 ==> default: Folder Successfully Mounted! 00:04:15.923 ==> default: Running provisioner: file... 00:04:16.857 default: ~/.gitconfig => .gitconfig 00:04:17.134 00:04:17.134 SUCCESS! 00:04:17.134 00:04:17.134 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:04:17.134 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:17.134 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
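The "-> value=" pairs above are the per-device QEMU arguments vagrant feeds to libvirt: one -drive per backing image plus one nvme-ns device binding it to a controller as a namespace. A sketch of the second controller flattened into a single invocation fragment, reconstructed only from the arguments and paths logged above (the machine, memory, and network options libvirt adds around it are omitted):

    # Sketch: controller nvme-1 (serial 12341) with its three raw-image namespaces,
    # exactly as the logged -device/-drive values describe them.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096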
00:04:17.134 00:04:17.142 [Pipeline] } 00:04:17.159 [Pipeline] // stage 00:04:17.167 [Pipeline] dir 00:04:17.167 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:04:17.169 [Pipeline] { 00:04:17.182 [Pipeline] catchError 00:04:17.183 [Pipeline] { 00:04:17.193 [Pipeline] sh 00:04:17.470 + vagrant ssh-config --host vagrant 00:04:17.470 + sed -ne /^Host/,$p+ 00:04:17.470 tee ssh_conf 00:04:21.649 Host vagrant 00:04:21.649 HostName 192.168.121.196 00:04:21.649 User vagrant 00:04:21.649 Port 22 00:04:21.649 UserKnownHostsFile /dev/null 00:04:21.649 StrictHostKeyChecking no 00:04:21.649 PasswordAuthentication no 00:04:21.649 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:04:21.649 IdentitiesOnly yes 00:04:21.649 LogLevel FATAL 00:04:21.649 ForwardAgent yes 00:04:21.649 ForwardX11 yes 00:04:21.649 00:04:21.661 [Pipeline] withEnv 00:04:21.663 [Pipeline] { 00:04:21.677 [Pipeline] sh 00:04:21.954 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:21.954 source /etc/os-release 00:04:21.954 [[ -e /image.version ]] && img=$(< /image.version) 00:04:21.954 # Minimal, systemd-like check. 00:04:21.954 if [[ -e /.dockerenv ]]; then 00:04:21.954 # Clear garbage from the node's name: 00:04:21.954 # agt-er_autotest_547-896 -> autotest_547-896 00:04:21.954 # $HOSTNAME is the actual container id 00:04:21.954 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:21.954 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:21.954 # We can assume this is a mount from a host where container is running, 00:04:21.954 # so fetch its hostname to easily identify the target swarm worker. 00:04:21.954 container="$(< /etc/hostname) ($agent)" 00:04:21.954 else 00:04:21.954 # Fallback 00:04:21.954 container=$agent 00:04:21.954 fi 00:04:21.954 fi 00:04:21.954 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:21.954 00:04:21.964 [Pipeline] } 00:04:21.984 [Pipeline] // withEnv 00:04:21.993 [Pipeline] setCustomBuildProperty 00:04:22.011 [Pipeline] stage 00:04:22.013 [Pipeline] { (Tests) 00:04:22.032 [Pipeline] sh 00:04:22.310 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:22.581 [Pipeline] sh 00:04:22.857 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:22.872 [Pipeline] timeout 00:04:22.872 Timeout set to expire in 30 min 00:04:22.874 [Pipeline] { 00:04:22.891 [Pipeline] sh 00:04:23.168 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:23.733 HEAD is now at 3c25cfe1d raid: Generic changes to support DIF/DIX for RAID 00:04:23.746 [Pipeline] sh 00:04:24.024 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:24.293 [Pipeline] sh 00:04:24.570 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:24.587 [Pipeline] sh 00:04:24.865 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:04:24.865 ++ readlink -f spdk_repo 00:04:24.865 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:24.865 + [[ -n /home/vagrant/spdk_repo ]] 00:04:24.865 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:24.865 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:24.865 
+ [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:24.865 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:04:24.865 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:24.865 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:04:24.865 + cd /home/vagrant/spdk_repo 00:04:24.865 + source /etc/os-release 00:04:24.865 ++ NAME='Fedora Linux' 00:04:24.865 ++ VERSION='38 (Cloud Edition)' 00:04:24.865 ++ ID=fedora 00:04:24.865 ++ VERSION_ID=38 00:04:24.865 ++ VERSION_CODENAME= 00:04:24.865 ++ PLATFORM_ID=platform:f38 00:04:24.865 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:24.865 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:24.865 ++ LOGO=fedora-logo-icon 00:04:24.865 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:24.865 ++ HOME_URL=https://fedoraproject.org/ 00:04:24.865 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:24.865 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:24.865 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:24.865 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:24.865 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:24.865 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:24.865 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:24.865 ++ SUPPORT_END=2024-05-14 00:04:24.865 ++ VARIANT='Cloud Edition' 00:04:24.865 ++ VARIANT_ID=cloud 00:04:24.865 + uname -a 00:04:24.865 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:24.865 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:25.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.430 Hugepages 00:04:25.430 node hugesize free / total 00:04:25.430 node0 1048576kB 0 / 0 00:04:25.430 node0 2048kB 0 / 0 00:04:25.430 00:04:25.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.430 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:25.430 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:25.430 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:25.430 + rm -f /tmp/spdk-ld-path 00:04:25.430 + source autorun-spdk.conf 00:04:25.430 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:25.430 ++ SPDK_TEST_NVMF=1 00:04:25.430 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:25.430 ++ SPDK_TEST_URING=1 00:04:25.430 ++ SPDK_TEST_USDT=1 00:04:25.430 ++ SPDK_RUN_UBSAN=1 00:04:25.430 ++ NET_TYPE=virt 00:04:25.430 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:25.430 ++ RUN_NIGHTLY=0 00:04:25.430 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:25.430 + [[ -n '' ]] 00:04:25.430 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:25.689 + for M in /var/spdk/build-*-manifest.txt 00:04:25.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:25.689 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:25.689 + for M in /var/spdk/build-*-manifest.txt 00:04:25.689 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:25.689 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:25.689 ++ uname 00:04:25.689 + [[ Linux == \L\i\n\u\x ]] 00:04:25.689 + sudo dmesg -T 00:04:25.689 + sudo dmesg --clear 00:04:25.689 + dmesg_pid=5268 00:04:25.689 + sudo dmesg -Tw 00:04:25.689 + [[ Fedora Linux == FreeBSD ]] 00:04:25.689 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:25.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:25.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:25.689 + [[ -x 
/usr/src/fio-static/fio ]] 00:04:25.689 + export FIO_BIN=/usr/src/fio-static/fio 00:04:25.689 + FIO_BIN=/usr/src/fio-static/fio 00:04:25.689 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:25.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:25.689 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:25.689 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:25.689 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:25.689 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:25.689 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:25.689 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:25.689 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:25.689 Test configuration: 00:04:25.689 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:25.689 SPDK_TEST_NVMF=1 00:04:25.689 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:25.689 SPDK_TEST_URING=1 00:04:25.689 SPDK_TEST_USDT=1 00:04:25.689 SPDK_RUN_UBSAN=1 00:04:25.689 NET_TYPE=virt 00:04:25.689 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:25.689 RUN_NIGHTLY=0 23:04:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.689 23:04:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:25.689 23:04:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.689 23:04:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.689 23:04:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.689 23:04:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.689 23:04:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.689 23:04:48 -- paths/export.sh@5 -- $ export PATH 00:04:25.689 23:04:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.689 23:04:48 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:25.689 23:04:48 -- common/autobuild_common.sh@444 -- $ date +%s 00:04:25.689 23:04:48 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721862288.XXXXXX 00:04:25.689 23:04:48 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721862288.BoGjQt 00:04:25.689 23:04:48 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:04:25.689 23:04:48 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:04:25.689 23:04:48 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:25.689 23:04:48 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:25.689 23:04:48 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:25.689 23:04:48 -- common/autobuild_common.sh@460 -- $ get_config_params 00:04:25.689 23:04:48 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:04:25.689 23:04:48 -- common/autotest_common.sh@10 -- $ set +x 00:04:25.689 23:04:48 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:04:25.689 23:04:48 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:04:25.689 23:04:48 -- pm/common@17 -- $ local monitor 00:04:25.689 23:04:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.689 23:04:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.689 23:04:48 -- pm/common@21 -- $ date +%s 00:04:25.689 23:04:48 -- pm/common@25 -- $ sleep 1 00:04:25.689 23:04:48 -- pm/common@21 -- $ date +%s 00:04:25.689 23:04:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721862288 00:04:25.689 23:04:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721862288 00:04:25.689 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721862288_collect-vmstat.pm.log 00:04:25.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721862288_collect-cpu-load.pm.log 00:04:26.883 23:04:49 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:04:26.883 23:04:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:26.883 23:04:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:26.883 23:04:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:26.883 23:04:49 -- spdk/autobuild.sh@16 -- $ date -u 00:04:26.883 Wed Jul 24 11:04:49 PM UTC 2024 00:04:26.883 23:04:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:26.883 v24.09-pre-224-g3c25cfe1d 00:04:26.883 23:04:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:26.883 23:04:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:26.883 23:04:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:26.883 23:04:49 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:26.883 23:04:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:26.883 23:04:49 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.883 ************************************ 00:04:26.883 START TEST ubsan 00:04:26.883 ************************************ 00:04:26.883 using ubsan 00:04:26.883 23:04:49 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:04:26.883 00:04:26.883 real 0m0.000s 
00:04:26.883 user 0m0.000s 00:04:26.883 sys 0m0.000s 00:04:26.883 23:04:49 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:26.883 23:04:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:26.883 ************************************ 00:04:26.883 END TEST ubsan 00:04:26.883 ************************************ 00:04:26.883 23:04:49 -- common/autotest_common.sh@1142 -- $ return 0 00:04:26.883 23:04:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:26.883 23:04:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:26.883 23:04:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:26.883 23:04:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:04:26.883 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:26.883 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:27.450 Using 'verbs' RDMA provider 00:04:43.254 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:55.452 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:55.452 Creating mk/config.mk...done. 00:04:55.452 Creating mk/cc.flags.mk...done. 00:04:55.452 Type 'make' to build. 00:04:55.452 23:05:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:55.452 23:05:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:04:55.452 23:05:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:55.452 23:05:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:55.452 ************************************ 00:04:55.452 START TEST make 00:04:55.452 ************************************ 00:04:55.452 23:05:16 make -- common/autotest_common.sh@1123 -- $ make -j10 00:04:55.452 make[1]: Nothing to be done for 'all'. 
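The configure flags and the make invocation captured above are all that is needed to reproduce this build step outside the CI VM. A minimal sketch, assuming the same checkout location and the submodule-bundled DPDK shown in the log, with nproc substituted for the hard-coded 10 jobs:

    # Sketch: replay the configure + build step exactly as logged.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j"$(nproc)"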
00:05:07.679 The Meson build system 00:05:07.679 Version: 1.3.1 00:05:07.679 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:07.679 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:07.679 Build type: native build 00:05:07.679 Program cat found: YES (/usr/bin/cat) 00:05:07.679 Project name: DPDK 00:05:07.679 Project version: 24.03.0 00:05:07.679 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:07.679 C linker for the host machine: cc ld.bfd 2.39-16 00:05:07.679 Host machine cpu family: x86_64 00:05:07.679 Host machine cpu: x86_64 00:05:07.679 Message: ## Building in Developer Mode ## 00:05:07.679 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:07.679 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:07.679 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:07.679 Program python3 found: YES (/usr/bin/python3) 00:05:07.679 Program cat found: YES (/usr/bin/cat) 00:05:07.679 Compiler for C supports arguments -march=native: YES 00:05:07.679 Checking for size of "void *" : 8 00:05:07.679 Checking for size of "void *" : 8 (cached) 00:05:07.679 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:05:07.679 Library m found: YES 00:05:07.679 Library numa found: YES 00:05:07.679 Has header "numaif.h" : YES 00:05:07.679 Library fdt found: NO 00:05:07.679 Library execinfo found: NO 00:05:07.679 Has header "execinfo.h" : YES 00:05:07.679 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:07.679 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:07.679 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:07.679 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:07.679 Run-time dependency openssl found: YES 3.0.9 00:05:07.679 Run-time dependency libpcap found: YES 1.10.4 00:05:07.679 Has header "pcap.h" with dependency libpcap: YES 00:05:07.679 Compiler for C supports arguments -Wcast-qual: YES 00:05:07.679 Compiler for C supports arguments -Wdeprecated: YES 00:05:07.679 Compiler for C supports arguments -Wformat: YES 00:05:07.679 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:07.679 Compiler for C supports arguments -Wformat-security: NO 00:05:07.679 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:07.679 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:07.679 Compiler for C supports arguments -Wnested-externs: YES 00:05:07.679 Compiler for C supports arguments -Wold-style-definition: YES 00:05:07.679 Compiler for C supports arguments -Wpointer-arith: YES 00:05:07.679 Compiler for C supports arguments -Wsign-compare: YES 00:05:07.679 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:07.679 Compiler for C supports arguments -Wundef: YES 00:05:07.679 Compiler for C supports arguments -Wwrite-strings: YES 00:05:07.679 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:07.679 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:07.679 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:07.679 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:07.679 Program objdump found: YES (/usr/bin/objdump) 00:05:07.679 Compiler for C supports arguments -mavx512f: YES 00:05:07.679 Checking if "AVX512 checking" compiles: YES 00:05:07.679 Fetching value of define "__SSE4_2__" : 1 00:05:07.679 Fetching value of define 
"__AES__" : 1 00:05:07.679 Fetching value of define "__AVX__" : 1 00:05:07.679 Fetching value of define "__AVX2__" : 1 00:05:07.679 Fetching value of define "__AVX512BW__" : (undefined) 00:05:07.679 Fetching value of define "__AVX512CD__" : (undefined) 00:05:07.679 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:07.679 Fetching value of define "__AVX512F__" : (undefined) 00:05:07.679 Fetching value of define "__AVX512VL__" : (undefined) 00:05:07.679 Fetching value of define "__PCLMUL__" : 1 00:05:07.679 Fetching value of define "__RDRND__" : 1 00:05:07.679 Fetching value of define "__RDSEED__" : 1 00:05:07.679 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:07.679 Fetching value of define "__znver1__" : (undefined) 00:05:07.679 Fetching value of define "__znver2__" : (undefined) 00:05:07.679 Fetching value of define "__znver3__" : (undefined) 00:05:07.679 Fetching value of define "__znver4__" : (undefined) 00:05:07.679 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:07.679 Message: lib/log: Defining dependency "log" 00:05:07.679 Message: lib/kvargs: Defining dependency "kvargs" 00:05:07.679 Message: lib/telemetry: Defining dependency "telemetry" 00:05:07.679 Checking for function "getentropy" : NO 00:05:07.679 Message: lib/eal: Defining dependency "eal" 00:05:07.679 Message: lib/ring: Defining dependency "ring" 00:05:07.679 Message: lib/rcu: Defining dependency "rcu" 00:05:07.679 Message: lib/mempool: Defining dependency "mempool" 00:05:07.679 Message: lib/mbuf: Defining dependency "mbuf" 00:05:07.679 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:07.679 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:07.679 Compiler for C supports arguments -mpclmul: YES 00:05:07.679 Compiler for C supports arguments -maes: YES 00:05:07.679 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:07.679 Compiler for C supports arguments -mavx512bw: YES 00:05:07.680 Compiler for C supports arguments -mavx512dq: YES 00:05:07.680 Compiler for C supports arguments -mavx512vl: YES 00:05:07.680 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:07.680 Compiler for C supports arguments -mavx2: YES 00:05:07.680 Compiler for C supports arguments -mavx: YES 00:05:07.680 Message: lib/net: Defining dependency "net" 00:05:07.680 Message: lib/meter: Defining dependency "meter" 00:05:07.680 Message: lib/ethdev: Defining dependency "ethdev" 00:05:07.680 Message: lib/pci: Defining dependency "pci" 00:05:07.680 Message: lib/cmdline: Defining dependency "cmdline" 00:05:07.680 Message: lib/hash: Defining dependency "hash" 00:05:07.680 Message: lib/timer: Defining dependency "timer" 00:05:07.680 Message: lib/compressdev: Defining dependency "compressdev" 00:05:07.680 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:07.680 Message: lib/dmadev: Defining dependency "dmadev" 00:05:07.680 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:07.680 Message: lib/power: Defining dependency "power" 00:05:07.680 Message: lib/reorder: Defining dependency "reorder" 00:05:07.680 Message: lib/security: Defining dependency "security" 00:05:07.680 Has header "linux/userfaultfd.h" : YES 00:05:07.680 Has header "linux/vduse.h" : YES 00:05:07.680 Message: lib/vhost: Defining dependency "vhost" 00:05:07.680 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:07.680 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:07.680 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:07.680 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:07.680 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:07.680 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:07.680 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:07.680 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:07.680 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:07.680 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:07.680 Program doxygen found: YES (/usr/bin/doxygen) 00:05:07.680 Configuring doxy-api-html.conf using configuration 00:05:07.680 Configuring doxy-api-man.conf using configuration 00:05:07.680 Program mandb found: YES (/usr/bin/mandb) 00:05:07.680 Program sphinx-build found: NO 00:05:07.680 Configuring rte_build_config.h using configuration 00:05:07.680 Message: 00:05:07.680 ================= 00:05:07.680 Applications Enabled 00:05:07.680 ================= 00:05:07.680 00:05:07.680 apps: 00:05:07.680 00:05:07.680 00:05:07.680 Message: 00:05:07.680 ================= 00:05:07.680 Libraries Enabled 00:05:07.680 ================= 00:05:07.680 00:05:07.680 libs: 00:05:07.680 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:07.680 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:07.680 cryptodev, dmadev, power, reorder, security, vhost, 00:05:07.680 00:05:07.680 Message: 00:05:07.680 =============== 00:05:07.680 Drivers Enabled 00:05:07.680 =============== 00:05:07.680 00:05:07.680 common: 00:05:07.680 00:05:07.680 bus: 00:05:07.680 pci, vdev, 00:05:07.680 mempool: 00:05:07.680 ring, 00:05:07.680 dma: 00:05:07.680 00:05:07.680 net: 00:05:07.680 00:05:07.680 crypto: 00:05:07.680 00:05:07.680 compress: 00:05:07.680 00:05:07.680 vdpa: 00:05:07.680 00:05:07.680 00:05:07.680 Message: 00:05:07.680 ================= 00:05:07.680 Content Skipped 00:05:07.680 ================= 00:05:07.680 00:05:07.680 apps: 00:05:07.680 dumpcap: explicitly disabled via build config 00:05:07.680 graph: explicitly disabled via build config 00:05:07.680 pdump: explicitly disabled via build config 00:05:07.680 proc-info: explicitly disabled via build config 00:05:07.680 test-acl: explicitly disabled via build config 00:05:07.680 test-bbdev: explicitly disabled via build config 00:05:07.680 test-cmdline: explicitly disabled via build config 00:05:07.680 test-compress-perf: explicitly disabled via build config 00:05:07.680 test-crypto-perf: explicitly disabled via build config 00:05:07.680 test-dma-perf: explicitly disabled via build config 00:05:07.680 test-eventdev: explicitly disabled via build config 00:05:07.680 test-fib: explicitly disabled via build config 00:05:07.680 test-flow-perf: explicitly disabled via build config 00:05:07.680 test-gpudev: explicitly disabled via build config 00:05:07.680 test-mldev: explicitly disabled via build config 00:05:07.680 test-pipeline: explicitly disabled via build config 00:05:07.680 test-pmd: explicitly disabled via build config 00:05:07.680 test-regex: explicitly disabled via build config 00:05:07.680 test-sad: explicitly disabled via build config 00:05:07.680 test-security-perf: explicitly disabled via build config 00:05:07.680 00:05:07.680 libs: 00:05:07.680 argparse: explicitly disabled via build config 00:05:07.680 metrics: explicitly disabled via build config 00:05:07.680 acl: explicitly disabled via build config 00:05:07.680 bbdev: explicitly disabled via build config 00:05:07.680 
bitratestats: explicitly disabled via build config 00:05:07.680 bpf: explicitly disabled via build config 00:05:07.680 cfgfile: explicitly disabled via build config 00:05:07.680 distributor: explicitly disabled via build config 00:05:07.680 efd: explicitly disabled via build config 00:05:07.680 eventdev: explicitly disabled via build config 00:05:07.680 dispatcher: explicitly disabled via build config 00:05:07.680 gpudev: explicitly disabled via build config 00:05:07.680 gro: explicitly disabled via build config 00:05:07.680 gso: explicitly disabled via build config 00:05:07.680 ip_frag: explicitly disabled via build config 00:05:07.680 jobstats: explicitly disabled via build config 00:05:07.680 latencystats: explicitly disabled via build config 00:05:07.680 lpm: explicitly disabled via build config 00:05:07.680 member: explicitly disabled via build config 00:05:07.680 pcapng: explicitly disabled via build config 00:05:07.680 rawdev: explicitly disabled via build config 00:05:07.680 regexdev: explicitly disabled via build config 00:05:07.680 mldev: explicitly disabled via build config 00:05:07.680 rib: explicitly disabled via build config 00:05:07.680 sched: explicitly disabled via build config 00:05:07.680 stack: explicitly disabled via build config 00:05:07.680 ipsec: explicitly disabled via build config 00:05:07.680 pdcp: explicitly disabled via build config 00:05:07.680 fib: explicitly disabled via build config 00:05:07.680 port: explicitly disabled via build config 00:05:07.680 pdump: explicitly disabled via build config 00:05:07.680 table: explicitly disabled via build config 00:05:07.680 pipeline: explicitly disabled via build config 00:05:07.680 graph: explicitly disabled via build config 00:05:07.680 node: explicitly disabled via build config 00:05:07.680 00:05:07.680 drivers: 00:05:07.680 common/cpt: not in enabled drivers build config 00:05:07.680 common/dpaax: not in enabled drivers build config 00:05:07.680 common/iavf: not in enabled drivers build config 00:05:07.680 common/idpf: not in enabled drivers build config 00:05:07.680 common/ionic: not in enabled drivers build config 00:05:07.680 common/mvep: not in enabled drivers build config 00:05:07.680 common/octeontx: not in enabled drivers build config 00:05:07.680 bus/auxiliary: not in enabled drivers build config 00:05:07.680 bus/cdx: not in enabled drivers build config 00:05:07.680 bus/dpaa: not in enabled drivers build config 00:05:07.680 bus/fslmc: not in enabled drivers build config 00:05:07.680 bus/ifpga: not in enabled drivers build config 00:05:07.680 bus/platform: not in enabled drivers build config 00:05:07.680 bus/uacce: not in enabled drivers build config 00:05:07.680 bus/vmbus: not in enabled drivers build config 00:05:07.680 common/cnxk: not in enabled drivers build config 00:05:07.680 common/mlx5: not in enabled drivers build config 00:05:07.680 common/nfp: not in enabled drivers build config 00:05:07.680 common/nitrox: not in enabled drivers build config 00:05:07.680 common/qat: not in enabled drivers build config 00:05:07.680 common/sfc_efx: not in enabled drivers build config 00:05:07.680 mempool/bucket: not in enabled drivers build config 00:05:07.680 mempool/cnxk: not in enabled drivers build config 00:05:07.680 mempool/dpaa: not in enabled drivers build config 00:05:07.680 mempool/dpaa2: not in enabled drivers build config 00:05:07.680 mempool/octeontx: not in enabled drivers build config 00:05:07.680 mempool/stack: not in enabled drivers build config 00:05:07.680 dma/cnxk: not in enabled drivers build 
config 00:05:07.680 dma/dpaa: not in enabled drivers build config 00:05:07.680 dma/dpaa2: not in enabled drivers build config 00:05:07.680 dma/hisilicon: not in enabled drivers build config 00:05:07.680 dma/idxd: not in enabled drivers build config 00:05:07.680 dma/ioat: not in enabled drivers build config 00:05:07.680 dma/skeleton: not in enabled drivers build config 00:05:07.680 net/af_packet: not in enabled drivers build config 00:05:07.680 net/af_xdp: not in enabled drivers build config 00:05:07.680 net/ark: not in enabled drivers build config 00:05:07.680 net/atlantic: not in enabled drivers build config 00:05:07.680 net/avp: not in enabled drivers build config 00:05:07.680 net/axgbe: not in enabled drivers build config 00:05:07.680 net/bnx2x: not in enabled drivers build config 00:05:07.680 net/bnxt: not in enabled drivers build config 00:05:07.680 net/bonding: not in enabled drivers build config 00:05:07.680 net/cnxk: not in enabled drivers build config 00:05:07.680 net/cpfl: not in enabled drivers build config 00:05:07.680 net/cxgbe: not in enabled drivers build config 00:05:07.680 net/dpaa: not in enabled drivers build config 00:05:07.680 net/dpaa2: not in enabled drivers build config 00:05:07.680 net/e1000: not in enabled drivers build config 00:05:07.680 net/ena: not in enabled drivers build config 00:05:07.680 net/enetc: not in enabled drivers build config 00:05:07.680 net/enetfec: not in enabled drivers build config 00:05:07.680 net/enic: not in enabled drivers build config 00:05:07.680 net/failsafe: not in enabled drivers build config 00:05:07.680 net/fm10k: not in enabled drivers build config 00:05:07.681 net/gve: not in enabled drivers build config 00:05:07.681 net/hinic: not in enabled drivers build config 00:05:07.681 net/hns3: not in enabled drivers build config 00:05:07.681 net/i40e: not in enabled drivers build config 00:05:07.681 net/iavf: not in enabled drivers build config 00:05:07.681 net/ice: not in enabled drivers build config 00:05:07.681 net/idpf: not in enabled drivers build config 00:05:07.681 net/igc: not in enabled drivers build config 00:05:07.681 net/ionic: not in enabled drivers build config 00:05:07.681 net/ipn3ke: not in enabled drivers build config 00:05:07.681 net/ixgbe: not in enabled drivers build config 00:05:07.681 net/mana: not in enabled drivers build config 00:05:07.681 net/memif: not in enabled drivers build config 00:05:07.681 net/mlx4: not in enabled drivers build config 00:05:07.681 net/mlx5: not in enabled drivers build config 00:05:07.681 net/mvneta: not in enabled drivers build config 00:05:07.681 net/mvpp2: not in enabled drivers build config 00:05:07.681 net/netvsc: not in enabled drivers build config 00:05:07.681 net/nfb: not in enabled drivers build config 00:05:07.681 net/nfp: not in enabled drivers build config 00:05:07.681 net/ngbe: not in enabled drivers build config 00:05:07.681 net/null: not in enabled drivers build config 00:05:07.681 net/octeontx: not in enabled drivers build config 00:05:07.681 net/octeon_ep: not in enabled drivers build config 00:05:07.681 net/pcap: not in enabled drivers build config 00:05:07.681 net/pfe: not in enabled drivers build config 00:05:07.681 net/qede: not in enabled drivers build config 00:05:07.681 net/ring: not in enabled drivers build config 00:05:07.681 net/sfc: not in enabled drivers build config 00:05:07.681 net/softnic: not in enabled drivers build config 00:05:07.681 net/tap: not in enabled drivers build config 00:05:07.681 net/thunderx: not in enabled drivers build config 00:05:07.681 
net/txgbe: not in enabled drivers build config 00:05:07.681 net/vdev_netvsc: not in enabled drivers build config 00:05:07.681 net/vhost: not in enabled drivers build config 00:05:07.681 net/virtio: not in enabled drivers build config 00:05:07.681 net/vmxnet3: not in enabled drivers build config 00:05:07.681 raw/*: missing internal dependency, "rawdev" 00:05:07.681 crypto/armv8: not in enabled drivers build config 00:05:07.681 crypto/bcmfs: not in enabled drivers build config 00:05:07.681 crypto/caam_jr: not in enabled drivers build config 00:05:07.681 crypto/ccp: not in enabled drivers build config 00:05:07.681 crypto/cnxk: not in enabled drivers build config 00:05:07.681 crypto/dpaa_sec: not in enabled drivers build config 00:05:07.681 crypto/dpaa2_sec: not in enabled drivers build config 00:05:07.681 crypto/ipsec_mb: not in enabled drivers build config 00:05:07.681 crypto/mlx5: not in enabled drivers build config 00:05:07.681 crypto/mvsam: not in enabled drivers build config 00:05:07.681 crypto/nitrox: not in enabled drivers build config 00:05:07.681 crypto/null: not in enabled drivers build config 00:05:07.681 crypto/octeontx: not in enabled drivers build config 00:05:07.681 crypto/openssl: not in enabled drivers build config 00:05:07.681 crypto/scheduler: not in enabled drivers build config 00:05:07.681 crypto/uadk: not in enabled drivers build config 00:05:07.681 crypto/virtio: not in enabled drivers build config 00:05:07.681 compress/isal: not in enabled drivers build config 00:05:07.681 compress/mlx5: not in enabled drivers build config 00:05:07.681 compress/nitrox: not in enabled drivers build config 00:05:07.681 compress/octeontx: not in enabled drivers build config 00:05:07.681 compress/zlib: not in enabled drivers build config 00:05:07.681 regex/*: missing internal dependency, "regexdev" 00:05:07.681 ml/*: missing internal dependency, "mldev" 00:05:07.681 vdpa/ifc: not in enabled drivers build config 00:05:07.681 vdpa/mlx5: not in enabled drivers build config 00:05:07.681 vdpa/nfp: not in enabled drivers build config 00:05:07.681 vdpa/sfc: not in enabled drivers build config 00:05:07.681 event/*: missing internal dependency, "eventdev" 00:05:07.681 baseband/*: missing internal dependency, "bbdev" 00:05:07.681 gpu/*: missing internal dependency, "gpudev" 00:05:07.681 00:05:07.681 00:05:07.681 Build targets in project: 85 00:05:07.681 00:05:07.681 DPDK 24.03.0 00:05:07.681 00:05:07.681 User defined options 00:05:07.681 buildtype : debug 00:05:07.681 default_library : shared 00:05:07.681 libdir : lib 00:05:07.681 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:07.681 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:07.681 c_link_args : 00:05:07.681 cpu_instruction_set: native 00:05:07.681 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:05:07.681 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:05:07.681 enable_docs : false 00:05:07.681 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:07.681 enable_kmods : false 00:05:07.681 max_lcores : 128 00:05:07.681 tests : false 00:05:07.681 00:05:07.681 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:07.681 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:05:07.681 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:07.681 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:07.681 [3/268] Linking static target lib/librte_kvargs.a 00:05:07.681 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:07.681 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:07.681 [6/268] Linking static target lib/librte_log.a 00:05:07.681 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.939 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:07.939 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:07.939 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:07.939 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:07.939 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:07.939 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:08.197 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:08.197 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:08.197 [16/268] Linking static target lib/librte_telemetry.a 00:05:08.197 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:08.197 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.455 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:08.455 [20/268] Linking target lib/librte_log.so.24.1 00:05:08.713 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:08.713 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:08.713 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:08.713 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:08.972 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:08.972 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:08.972 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:08.972 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:08.972 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:08.972 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:08.972 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.972 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:08.972 [33/268] Linking target lib/librte_telemetry.so.24.1 00:05:09.231 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:09.231 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:09.489 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:09.489 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:09.489 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:09.747 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:09.747 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:09.747 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:09.747 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:09.747 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:09.747 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:09.747 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:10.005 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:10.262 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:10.262 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:10.262 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:10.262 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:10.520 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:10.778 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:10.778 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:10.778 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:10.778 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:10.778 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:11.045 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:11.045 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:11.302 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:11.302 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:11.302 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:11.302 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:11.560 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:11.817 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:11.817 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:11.817 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:11.817 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:11.817 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:12.075 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:12.075 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:12.075 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:12.332 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:12.332 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:12.332 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:12.332 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:12.590 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:12.590 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:12.590 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:12.847 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:12.847 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:12.847 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:13.105 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:13.105 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:13.105 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:13.105 [85/268] Linking static target lib/librte_ring.a 00:05:13.105 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:13.363 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:13.363 [88/268] Linking static target lib/librte_rcu.a 00:05:13.363 [89/268] Linking static target lib/librte_eal.a 00:05:13.363 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:13.363 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:13.621 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:13.621 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.621 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:13.621 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:13.621 [96/268] Linking static target lib/librte_mempool.a 00:05:13.621 [97/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:13.877 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:13.877 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:13.877 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:13.877 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:14.135 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:14.393 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:14.393 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:14.393 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:14.393 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:14.393 [107/268] Linking static target lib/librte_mbuf.a 00:05:14.393 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:14.393 [109/268] Linking static target lib/librte_net.a 00:05:14.393 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:14.393 [111/268] Linking static target lib/librte_meter.a 00:05:14.959 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:14.959 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.959 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.959 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:14.959 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:14.959 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.959 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:15.523 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.523 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:15.523 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:15.523 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:15.798 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:15.798 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:16.090 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:16.090 [126/268] Linking static target lib/librte_pci.a 00:05:16.090 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:16.090 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:16.090 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:16.090 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:16.090 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:16.090 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:16.348 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:16.348 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:16.348 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.348 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:16.348 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:16.348 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:16.348 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:16.605 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:16.605 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:16.605 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:16.605 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:16.605 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:16.605 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:16.605 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:16.863 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:16.863 [148/268] Linking static target lib/librte_cmdline.a 00:05:17.120 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:17.120 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:17.120 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:17.120 [152/268] Linking static target lib/librte_timer.a 00:05:17.378 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:17.378 [154/268] Linking static target lib/librte_ethdev.a 00:05:17.635 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:17.635 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:17.635 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:17.635 [158/268] Linking static target lib/librte_hash.a 00:05:17.635 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:17.635 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:17.893 [161/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:17.893 [162/268] Linking static target lib/librte_compressdev.a 00:05:17.893 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:17.893 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:18.150 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:18.407 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:18.408 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:18.408 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:18.408 [169/268] Linking static target lib/librte_dmadev.a 00:05:18.408 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.665 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:18.665 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:18.665 [173/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.665 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:18.665 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.665 [176/268] Linking static target lib/librte_cryptodev.a 00:05:18.923 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:18.923 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:19.181 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:19.181 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:19.181 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:19.181 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:19.181 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:19.438 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:19.438 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:19.438 [186/268] Linking static target lib/librte_power.a 00:05:20.004 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:20.004 [188/268] Linking static target lib/librte_security.a 00:05:20.004 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:20.004 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:20.004 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:20.261 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:20.261 [193/268] Linking static target lib/librte_reorder.a 00:05:20.519 [194/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.519 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:20.519 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.519 [197/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.776 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:20.776 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:20.776 [200/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:21.033 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:21.033 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:21.033 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:21.294 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:21.294 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:21.294 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:21.551 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:21.551 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:21.552 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:21.552 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:21.809 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:21.809 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:21.809 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:21.809 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:21.809 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:21.809 [216/268] Linking static target drivers/librte_bus_vdev.a 00:05:21.809 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:21.809 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:21.809 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:21.809 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.066 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:22.066 [222/268] Linking static target drivers/librte_bus_pci.a 00:05:22.066 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:22.066 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.066 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:22.067 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.067 [227/268] Linking static target drivers/librte_mempool_ring.a 00:05:22.631 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:22.889 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:22.889 [230/268] Linking static target lib/librte_vhost.a 00:05:23.822 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.080 [232/268] Linking target lib/librte_eal.so.24.1 00:05:24.080 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:24.337 [234/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.337 [235/268] Linking target lib/librte_pci.so.24.1 00:05:24.337 [236/268] Linking target lib/librte_ring.so.24.1 00:05:24.337 [237/268] Linking target lib/librte_meter.so.24.1 00:05:24.337 [238/268] Linking target lib/librte_timer.so.24.1 00:05:24.337 
[239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:24.337 [240/268] Linking target lib/librte_dmadev.so.24.1 00:05:24.337 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:24.337 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:24.337 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:24.337 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:24.337 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:24.337 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:24.337 [247/268] Linking target lib/librte_rcu.so.24.1 00:05:24.337 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:24.595 [249/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:24.595 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:24.595 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:24.595 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:24.595 [253/268] Linking target lib/librte_mbuf.so.24.1 00:05:24.854 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:24.854 [255/268] Linking target lib/librte_net.so.24.1 00:05:24.854 [256/268] Linking target lib/librte_compressdev.so.24.1 00:05:24.854 [257/268] Linking target lib/librte_reorder.so.24.1 00:05:24.854 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:05:25.111 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:25.111 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:25.111 [261/268] Linking target lib/librte_cmdline.so.24.1 00:05:25.111 [262/268] Linking target lib/librte_security.so.24.1 00:05:25.111 [263/268] Linking target lib/librte_hash.so.24.1 00:05:25.111 [264/268] Linking target lib/librte_ethdev.so.24.1 00:05:25.368 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:25.368 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:25.368 [267/268] Linking target lib/librte_power.so.24.1 00:05:25.368 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:25.368 INFO: autodetecting backend as ninja 00:05:25.368 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:26.764 CC lib/ut/ut.o 00:05:26.764 CC lib/log/log.o 00:05:26.764 CC lib/log/log_flags.o 00:05:26.764 CC lib/log/log_deprecated.o 00:05:26.764 CC lib/ut_mock/mock.o 00:05:26.764 LIB libspdk_ut.a 00:05:26.764 SO libspdk_ut.so.2.0 00:05:26.764 LIB libspdk_log.a 00:05:26.764 LIB libspdk_ut_mock.a 00:05:27.021 SO libspdk_log.so.7.0 00:05:27.021 SYMLINK libspdk_ut.so 00:05:27.021 SO libspdk_ut_mock.so.6.0 00:05:27.021 SYMLINK libspdk_ut_mock.so 00:05:27.021 SYMLINK libspdk_log.so 00:05:27.278 CC lib/ioat/ioat.o 00:05:27.278 CC lib/util/base64.o 00:05:27.278 CC lib/util/bit_array.o 00:05:27.278 CC lib/util/crc16.o 00:05:27.278 CC lib/util/cpuset.o 00:05:27.278 CC lib/util/crc32.o 00:05:27.278 CC lib/dma/dma.o 00:05:27.278 CC lib/util/crc32c.o 00:05:27.278 CXX lib/trace_parser/trace.o 00:05:27.278 CC lib/vfio_user/host/vfio_user_pci.o 00:05:27.278 CC lib/util/crc32_ieee.o 00:05:27.278 CC lib/util/crc64.o 00:05:27.278 CC 
lib/util/dif.o 00:05:27.278 CC lib/util/fd.o 00:05:27.536 CC lib/vfio_user/host/vfio_user.o 00:05:27.536 LIB libspdk_dma.a 00:05:27.536 SO libspdk_dma.so.4.0 00:05:27.536 CC lib/util/file.o 00:05:27.536 CC lib/util/hexlify.o 00:05:27.536 LIB libspdk_ioat.a 00:05:27.536 SO libspdk_ioat.so.7.0 00:05:27.536 SYMLINK libspdk_dma.so 00:05:27.536 CC lib/util/iov.o 00:05:27.536 CC lib/util/pipe.o 00:05:27.536 CC lib/util/math.o 00:05:27.536 SYMLINK libspdk_ioat.so 00:05:27.536 CC lib/util/strerror_tls.o 00:05:27.536 CC lib/util/string.o 00:05:27.536 CC lib/util/uuid.o 00:05:27.536 LIB libspdk_vfio_user.a 00:05:27.536 CC lib/util/fd_group.o 00:05:27.793 SO libspdk_vfio_user.so.5.0 00:05:27.793 CC lib/util/xor.o 00:05:27.793 SYMLINK libspdk_vfio_user.so 00:05:27.793 CC lib/util/zipf.o 00:05:28.051 LIB libspdk_util.a 00:05:28.051 SO libspdk_util.so.9.1 00:05:28.051 LIB libspdk_trace_parser.a 00:05:28.308 SO libspdk_trace_parser.so.5.0 00:05:28.308 SYMLINK libspdk_util.so 00:05:28.308 SYMLINK libspdk_trace_parser.so 00:05:28.308 CC lib/rdma_provider/common.o 00:05:28.308 CC lib/env_dpdk/env.o 00:05:28.308 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:28.308 CC lib/env_dpdk/memory.o 00:05:28.308 CC lib/env_dpdk/pci.o 00:05:28.308 CC lib/vmd/vmd.o 00:05:28.308 CC lib/idxd/idxd.o 00:05:28.308 CC lib/rdma_utils/rdma_utils.o 00:05:28.308 CC lib/json/json_parse.o 00:05:28.308 CC lib/conf/conf.o 00:05:28.566 CC lib/env_dpdk/init.o 00:05:28.566 LIB libspdk_rdma_provider.a 00:05:28.566 SO libspdk_rdma_provider.so.6.0 00:05:28.566 LIB libspdk_conf.a 00:05:28.824 LIB libspdk_rdma_utils.a 00:05:28.824 SO libspdk_conf.so.6.0 00:05:28.824 CC lib/json/json_util.o 00:05:28.824 SO libspdk_rdma_utils.so.1.0 00:05:28.824 SYMLINK libspdk_rdma_provider.so 00:05:28.824 CC lib/json/json_write.o 00:05:28.824 SYMLINK libspdk_conf.so 00:05:28.824 CC lib/env_dpdk/threads.o 00:05:28.824 CC lib/env_dpdk/pci_ioat.o 00:05:28.824 SYMLINK libspdk_rdma_utils.so 00:05:28.824 CC lib/vmd/led.o 00:05:28.824 CC lib/env_dpdk/pci_virtio.o 00:05:29.082 CC lib/env_dpdk/pci_vmd.o 00:05:29.082 CC lib/idxd/idxd_user.o 00:05:29.082 CC lib/env_dpdk/pci_idxd.o 00:05:29.082 CC lib/env_dpdk/pci_event.o 00:05:29.082 CC lib/env_dpdk/sigbus_handler.o 00:05:29.082 LIB libspdk_json.a 00:05:29.082 LIB libspdk_vmd.a 00:05:29.082 CC lib/env_dpdk/pci_dpdk.o 00:05:29.082 SO libspdk_json.so.6.0 00:05:29.082 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:29.082 SO libspdk_vmd.so.6.0 00:05:29.082 CC lib/idxd/idxd_kernel.o 00:05:29.082 SYMLINK libspdk_vmd.so 00:05:29.082 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:29.082 SYMLINK libspdk_json.so 00:05:29.339 LIB libspdk_idxd.a 00:05:29.339 SO libspdk_idxd.so.12.0 00:05:29.339 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:29.339 CC lib/jsonrpc/jsonrpc_server.o 00:05:29.339 CC lib/jsonrpc/jsonrpc_client.o 00:05:29.339 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:29.339 SYMLINK libspdk_idxd.so 00:05:29.905 LIB libspdk_jsonrpc.a 00:05:29.905 SO libspdk_jsonrpc.so.6.0 00:05:29.905 SYMLINK libspdk_jsonrpc.so 00:05:29.905 LIB libspdk_env_dpdk.a 00:05:29.905 SO libspdk_env_dpdk.so.14.1 00:05:30.193 CC lib/rpc/rpc.o 00:05:30.193 SYMLINK libspdk_env_dpdk.so 00:05:30.454 LIB libspdk_rpc.a 00:05:30.454 SO libspdk_rpc.so.6.0 00:05:30.454 SYMLINK libspdk_rpc.so 00:05:30.711 CC lib/notify/notify_rpc.o 00:05:30.711 CC lib/notify/notify.o 00:05:30.711 CC lib/trace/trace.o 00:05:30.711 CC lib/trace/trace_flags.o 00:05:30.711 CC lib/trace/trace_rpc.o 00:05:30.711 CC lib/keyring/keyring.o 00:05:30.711 CC lib/keyring/keyring_rpc.o 00:05:30.969 LIB 
libspdk_notify.a 00:05:30.969 SO libspdk_notify.so.6.0 00:05:30.969 LIB libspdk_trace.a 00:05:30.969 LIB libspdk_keyring.a 00:05:30.969 SYMLINK libspdk_notify.so 00:05:30.969 SO libspdk_keyring.so.1.0 00:05:30.969 SO libspdk_trace.so.10.0 00:05:31.226 SYMLINK libspdk_keyring.so 00:05:31.226 SYMLINK libspdk_trace.so 00:05:31.482 CC lib/sock/sock.o 00:05:31.482 CC lib/sock/sock_rpc.o 00:05:31.482 CC lib/thread/thread.o 00:05:31.482 CC lib/thread/iobuf.o 00:05:31.739 LIB libspdk_sock.a 00:05:31.739 SO libspdk_sock.so.10.0 00:05:31.997 SYMLINK libspdk_sock.so 00:05:32.255 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:32.255 CC lib/nvme/nvme_ctrlr.o 00:05:32.255 CC lib/nvme/nvme_fabric.o 00:05:32.255 CC lib/nvme/nvme_ns_cmd.o 00:05:32.255 CC lib/nvme/nvme_ns.o 00:05:32.255 CC lib/nvme/nvme_pcie.o 00:05:32.255 CC lib/nvme/nvme_pcie_common.o 00:05:32.255 CC lib/nvme/nvme_qpair.o 00:05:32.255 CC lib/nvme/nvme.o 00:05:33.188 LIB libspdk_thread.a 00:05:33.188 SO libspdk_thread.so.10.1 00:05:33.188 CC lib/nvme/nvme_quirks.o 00:05:33.188 SYMLINK libspdk_thread.so 00:05:33.188 CC lib/nvme/nvme_transport.o 00:05:33.188 CC lib/nvme/nvme_discovery.o 00:05:33.188 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:33.188 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:33.188 CC lib/nvme/nvme_tcp.o 00:05:33.188 CC lib/nvme/nvme_opal.o 00:05:33.188 CC lib/nvme/nvme_io_msg.o 00:05:33.446 CC lib/accel/accel.o 00:05:33.704 CC lib/nvme/nvme_poll_group.o 00:05:33.704 CC lib/nvme/nvme_zns.o 00:05:33.962 CC lib/nvme/nvme_stubs.o 00:05:33.962 CC lib/nvme/nvme_auth.o 00:05:33.962 CC lib/blob/blobstore.o 00:05:34.220 CC lib/init/json_config.o 00:05:34.220 CC lib/init/subsystem.o 00:05:34.478 CC lib/blob/request.o 00:05:34.478 CC lib/blob/zeroes.o 00:05:34.478 CC lib/init/subsystem_rpc.o 00:05:34.478 CC lib/init/rpc.o 00:05:34.478 CC lib/blob/blob_bs_dev.o 00:05:34.478 CC lib/virtio/virtio.o 00:05:34.478 CC lib/accel/accel_rpc.o 00:05:34.735 CC lib/nvme/nvme_cuse.o 00:05:34.735 CC lib/nvme/nvme_rdma.o 00:05:34.735 LIB libspdk_init.a 00:05:34.735 SO libspdk_init.so.5.0 00:05:34.735 CC lib/virtio/virtio_vhost_user.o 00:05:34.735 CC lib/accel/accel_sw.o 00:05:34.735 SYMLINK libspdk_init.so 00:05:34.735 CC lib/virtio/virtio_vfio_user.o 00:05:34.735 CC lib/virtio/virtio_pci.o 00:05:34.993 CC lib/event/reactor.o 00:05:34.993 CC lib/event/app.o 00:05:34.993 CC lib/event/log_rpc.o 00:05:34.993 CC lib/event/app_rpc.o 00:05:34.993 LIB libspdk_accel.a 00:05:34.993 CC lib/event/scheduler_static.o 00:05:35.251 SO libspdk_accel.so.15.1 00:05:35.251 LIB libspdk_virtio.a 00:05:35.251 SO libspdk_virtio.so.7.0 00:05:35.251 SYMLINK libspdk_accel.so 00:05:35.251 SYMLINK libspdk_virtio.so 00:05:35.251 LIB libspdk_event.a 00:05:35.508 SO libspdk_event.so.14.0 00:05:35.508 CC lib/bdev/bdev_rpc.o 00:05:35.508 CC lib/bdev/bdev.o 00:05:35.508 CC lib/bdev/bdev_zone.o 00:05:35.508 CC lib/bdev/part.o 00:05:35.508 CC lib/bdev/scsi_nvme.o 00:05:35.508 SYMLINK libspdk_event.so 00:05:36.073 LIB libspdk_nvme.a 00:05:36.331 SO libspdk_nvme.so.13.1 00:05:36.902 SYMLINK libspdk_nvme.so 00:05:37.159 LIB libspdk_blob.a 00:05:37.159 SO libspdk_blob.so.11.0 00:05:37.159 SYMLINK libspdk_blob.so 00:05:37.417 CC lib/blobfs/blobfs.o 00:05:37.417 CC lib/blobfs/tree.o 00:05:37.417 CC lib/lvol/lvol.o 00:05:37.983 LIB libspdk_bdev.a 00:05:38.241 SO libspdk_bdev.so.15.1 00:05:38.241 SYMLINK libspdk_bdev.so 00:05:38.241 LIB libspdk_blobfs.a 00:05:38.500 SO libspdk_blobfs.so.10.0 00:05:38.500 LIB libspdk_lvol.a 00:05:38.500 SYMLINK libspdk_blobfs.so 00:05:38.500 CC lib/nvmf/ctrlr.o 00:05:38.500 CC 
lib/nvmf/ctrlr_bdev.o 00:05:38.500 CC lib/nvmf/ctrlr_discovery.o 00:05:38.500 CC lib/scsi/dev.o 00:05:38.500 CC lib/nvmf/subsystem.o 00:05:38.500 CC lib/nvmf/nvmf.o 00:05:38.500 SO libspdk_lvol.so.10.0 00:05:38.500 CC lib/ublk/ublk.o 00:05:38.500 CC lib/nbd/nbd.o 00:05:38.500 CC lib/ftl/ftl_core.o 00:05:38.500 SYMLINK libspdk_lvol.so 00:05:38.500 CC lib/nbd/nbd_rpc.o 00:05:38.759 CC lib/scsi/lun.o 00:05:38.759 CC lib/scsi/port.o 00:05:39.016 CC lib/ftl/ftl_init.o 00:05:39.016 LIB libspdk_nbd.a 00:05:39.016 CC lib/ftl/ftl_layout.o 00:05:39.016 SO libspdk_nbd.so.7.0 00:05:39.016 CC lib/scsi/scsi.o 00:05:39.016 CC lib/scsi/scsi_bdev.o 00:05:39.016 SYMLINK libspdk_nbd.so 00:05:39.016 CC lib/scsi/scsi_pr.o 00:05:39.016 CC lib/scsi/scsi_rpc.o 00:05:39.274 CC lib/scsi/task.o 00:05:39.274 CC lib/ublk/ublk_rpc.o 00:05:39.274 CC lib/nvmf/nvmf_rpc.o 00:05:39.274 CC lib/nvmf/transport.o 00:05:39.274 LIB libspdk_ublk.a 00:05:39.274 CC lib/nvmf/tcp.o 00:05:39.274 CC lib/ftl/ftl_debug.o 00:05:39.533 SO libspdk_ublk.so.3.0 00:05:39.533 CC lib/nvmf/stubs.o 00:05:39.533 CC lib/nvmf/mdns_server.o 00:05:39.533 SYMLINK libspdk_ublk.so 00:05:39.533 CC lib/nvmf/rdma.o 00:05:39.533 LIB libspdk_scsi.a 00:05:39.533 SO libspdk_scsi.so.9.0 00:05:39.792 CC lib/ftl/ftl_io.o 00:05:39.792 SYMLINK libspdk_scsi.so 00:05:39.792 CC lib/nvmf/auth.o 00:05:39.792 CC lib/ftl/ftl_sb.o 00:05:40.051 CC lib/ftl/ftl_l2p.o 00:05:40.051 CC lib/ftl/ftl_l2p_flat.o 00:05:40.051 CC lib/iscsi/conn.o 00:05:40.051 CC lib/vhost/vhost.o 00:05:40.051 CC lib/vhost/vhost_rpc.o 00:05:40.051 CC lib/vhost/vhost_scsi.o 00:05:40.051 CC lib/vhost/vhost_blk.o 00:05:40.309 CC lib/ftl/ftl_nv_cache.o 00:05:40.309 CC lib/iscsi/init_grp.o 00:05:40.567 CC lib/iscsi/iscsi.o 00:05:40.567 CC lib/iscsi/md5.o 00:05:40.567 CC lib/iscsi/param.o 00:05:40.825 CC lib/ftl/ftl_band.o 00:05:40.825 CC lib/ftl/ftl_band_ops.o 00:05:40.825 CC lib/ftl/ftl_writer.o 00:05:40.825 CC lib/ftl/ftl_rq.o 00:05:41.083 CC lib/iscsi/portal_grp.o 00:05:41.083 CC lib/vhost/rte_vhost_user.o 00:05:41.083 CC lib/ftl/ftl_reloc.o 00:05:41.083 CC lib/ftl/ftl_l2p_cache.o 00:05:41.083 CC lib/ftl/ftl_p2l.o 00:05:41.083 CC lib/iscsi/tgt_node.o 00:05:41.083 CC lib/iscsi/iscsi_subsystem.o 00:05:41.339 CC lib/iscsi/iscsi_rpc.o 00:05:41.339 CC lib/iscsi/task.o 00:05:41.596 CC lib/ftl/mngt/ftl_mngt.o 00:05:41.596 LIB libspdk_nvmf.a 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:41.596 SO libspdk_nvmf.so.19.0 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:41.596 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:41.864 SYMLINK libspdk_nvmf.so 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:41.864 LIB libspdk_iscsi.a 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:41.864 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:41.864 SO libspdk_iscsi.so.8.0 00:05:41.864 CC lib/ftl/utils/ftl_conf.o 00:05:41.864 CC lib/ftl/utils/ftl_md.o 00:05:42.123 CC lib/ftl/utils/ftl_mempool.o 00:05:42.123 CC lib/ftl/utils/ftl_bitmap.o 00:05:42.123 CC lib/ftl/utils/ftl_property.o 00:05:42.123 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:42.123 SYMLINK libspdk_iscsi.so 00:05:42.123 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:42.123 LIB libspdk_vhost.a 00:05:42.123 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:42.123 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:05:42.123 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:42.123 SO libspdk_vhost.so.8.0 00:05:42.123 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:42.395 SYMLINK libspdk_vhost.so 00:05:42.395 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:42.395 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:42.395 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:42.395 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:42.395 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:42.395 CC lib/ftl/base/ftl_base_dev.o 00:05:42.395 CC lib/ftl/base/ftl_base_bdev.o 00:05:42.395 CC lib/ftl/ftl_trace.o 00:05:42.654 LIB libspdk_ftl.a 00:05:42.912 SO libspdk_ftl.so.9.0 00:05:43.476 SYMLINK libspdk_ftl.so 00:05:43.734 CC module/env_dpdk/env_dpdk_rpc.o 00:05:43.734 CC module/accel/dsa/accel_dsa.o 00:05:43.734 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:43.734 CC module/keyring/file/keyring.o 00:05:43.734 CC module/accel/iaa/accel_iaa.o 00:05:43.734 CC module/sock/posix/posix.o 00:05:43.734 CC module/accel/error/accel_error.o 00:05:43.734 CC module/accel/ioat/accel_ioat.o 00:05:43.734 CC module/sock/uring/uring.o 00:05:43.992 CC module/blob/bdev/blob_bdev.o 00:05:43.992 LIB libspdk_env_dpdk_rpc.a 00:05:43.992 SO libspdk_env_dpdk_rpc.so.6.0 00:05:43.992 CC module/keyring/file/keyring_rpc.o 00:05:43.992 SYMLINK libspdk_env_dpdk_rpc.so 00:05:43.992 CC module/accel/error/accel_error_rpc.o 00:05:43.992 LIB libspdk_scheduler_dynamic.a 00:05:43.992 CC module/accel/ioat/accel_ioat_rpc.o 00:05:43.992 SO libspdk_scheduler_dynamic.so.4.0 00:05:43.992 CC module/accel/dsa/accel_dsa_rpc.o 00:05:43.992 CC module/accel/iaa/accel_iaa_rpc.o 00:05:43.992 LIB libspdk_keyring_file.a 00:05:43.992 SYMLINK libspdk_scheduler_dynamic.so 00:05:43.992 LIB libspdk_accel_error.a 00:05:44.251 SO libspdk_keyring_file.so.1.0 00:05:44.251 SO libspdk_accel_error.so.2.0 00:05:44.251 LIB libspdk_blob_bdev.a 00:05:44.251 LIB libspdk_accel_ioat.a 00:05:44.251 SO libspdk_blob_bdev.so.11.0 00:05:44.251 LIB libspdk_accel_dsa.a 00:05:44.251 SO libspdk_accel_ioat.so.6.0 00:05:44.251 SYMLINK libspdk_accel_error.so 00:05:44.251 SYMLINK libspdk_keyring_file.so 00:05:44.251 LIB libspdk_accel_iaa.a 00:05:44.251 CC module/keyring/linux/keyring.o 00:05:44.251 SO libspdk_accel_dsa.so.5.0 00:05:44.251 SYMLINK libspdk_blob_bdev.so 00:05:44.251 SYMLINK libspdk_accel_ioat.so 00:05:44.251 SO libspdk_accel_iaa.so.3.0 00:05:44.251 CC module/keyring/linux/keyring_rpc.o 00:05:44.251 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:44.251 SYMLINK libspdk_accel_dsa.so 00:05:44.251 SYMLINK libspdk_accel_iaa.so 00:05:44.509 CC module/scheduler/gscheduler/gscheduler.o 00:05:44.509 LIB libspdk_keyring_linux.a 00:05:44.509 SO libspdk_keyring_linux.so.1.0 00:05:44.509 LIB libspdk_scheduler_dpdk_governor.a 00:05:44.509 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:44.509 CC module/bdev/error/vbdev_error.o 00:05:44.509 CC module/bdev/delay/vbdev_delay.o 00:05:44.509 CC module/bdev/gpt/gpt.o 00:05:44.509 SYMLINK libspdk_keyring_linux.so 00:05:44.509 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:44.509 CC module/blobfs/bdev/blobfs_bdev.o 00:05:44.509 LIB libspdk_sock_uring.a 00:05:44.509 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:44.509 CC module/bdev/gpt/vbdev_gpt.o 00:05:44.509 CC module/bdev/lvol/vbdev_lvol.o 00:05:44.509 LIB libspdk_scheduler_gscheduler.a 00:05:44.509 LIB libspdk_sock_posix.a 00:05:44.509 SO libspdk_sock_uring.so.5.0 00:05:44.509 SO libspdk_scheduler_gscheduler.so.4.0 00:05:44.767 SO libspdk_sock_posix.so.6.0 00:05:44.767 SYMLINK libspdk_sock_uring.so 00:05:44.767 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:44.767 SYMLINK libspdk_scheduler_gscheduler.so 00:05:44.767 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:44.767 CC module/bdev/error/vbdev_error_rpc.o 00:05:44.767 SYMLINK libspdk_sock_posix.so 00:05:44.767 LIB libspdk_blobfs_bdev.a 00:05:44.767 LIB libspdk_bdev_gpt.a 00:05:44.767 CC module/bdev/malloc/bdev_malloc.o 00:05:45.026 SO libspdk_blobfs_bdev.so.6.0 00:05:45.026 LIB libspdk_bdev_error.a 00:05:45.026 SO libspdk_bdev_gpt.so.6.0 00:05:45.026 CC module/bdev/null/bdev_null.o 00:05:45.026 CC module/bdev/nvme/bdev_nvme.o 00:05:45.026 SO libspdk_bdev_error.so.6.0 00:05:45.026 LIB libspdk_bdev_delay.a 00:05:45.026 SYMLINK libspdk_blobfs_bdev.so 00:05:45.026 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:45.026 CC module/bdev/passthru/vbdev_passthru.o 00:05:45.026 SYMLINK libspdk_bdev_gpt.so 00:05:45.026 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:45.026 SO libspdk_bdev_delay.so.6.0 00:05:45.026 SYMLINK libspdk_bdev_error.so 00:05:45.026 SYMLINK libspdk_bdev_delay.so 00:05:45.026 LIB libspdk_bdev_lvol.a 00:05:45.284 SO libspdk_bdev_lvol.so.6.0 00:05:45.284 SYMLINK libspdk_bdev_lvol.so 00:05:45.284 CC module/bdev/null/bdev_null_rpc.o 00:05:45.284 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:45.284 CC module/bdev/raid/bdev_raid.o 00:05:45.284 LIB libspdk_bdev_passthru.a 00:05:45.284 CC module/bdev/split/vbdev_split.o 00:05:45.284 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:45.284 SO libspdk_bdev_passthru.so.6.0 00:05:45.284 CC module/bdev/uring/bdev_uring.o 00:05:45.543 SYMLINK libspdk_bdev_passthru.so 00:05:45.543 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:45.543 LIB libspdk_bdev_null.a 00:05:45.543 LIB libspdk_bdev_malloc.a 00:05:45.543 CC module/bdev/aio/bdev_aio.o 00:05:45.543 SO libspdk_bdev_null.so.6.0 00:05:45.543 SO libspdk_bdev_malloc.so.6.0 00:05:45.543 SYMLINK libspdk_bdev_null.so 00:05:45.543 CC module/bdev/split/vbdev_split_rpc.o 00:05:45.543 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.543 SYMLINK libspdk_bdev_malloc.so 00:05:45.543 CC module/bdev/uring/bdev_uring_rpc.o 00:05:45.543 CC module/bdev/raid/bdev_raid_rpc.o 00:05:45.801 LIB libspdk_bdev_zone_block.a 00:05:45.801 SO libspdk_bdev_zone_block.so.6.0 00:05:45.801 LIB libspdk_bdev_split.a 00:05:45.801 CC module/bdev/ftl/bdev_ftl.o 00:05:45.801 CC module/bdev/nvme/nvme_rpc.o 00:05:45.801 SO libspdk_bdev_split.so.6.0 00:05:45.801 LIB libspdk_bdev_uring.a 00:05:45.801 SYMLINK libspdk_bdev_zone_block.so 00:05:45.801 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.801 SO libspdk_bdev_uring.so.6.0 00:05:45.801 SYMLINK libspdk_bdev_split.so 00:05:45.801 LIB libspdk_bdev_aio.a 00:05:45.801 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.801 CC module/bdev/nvme/bdev_mdns_client.o 00:05:45.801 SO libspdk_bdev_aio.so.6.0 00:05:46.059 SYMLINK libspdk_bdev_uring.so 00:05:46.059 CC module/bdev/raid/raid0.o 00:05:46.059 SYMLINK libspdk_bdev_aio.so 00:05:46.059 CC module/bdev/iscsi/bdev_iscsi.o 00:05:46.059 CC module/bdev/nvme/vbdev_opal.o 00:05:46.059 CC module/bdev/raid/raid1.o 00:05:46.059 CC module/bdev/raid/concat.o 00:05:46.059 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:46.059 LIB libspdk_bdev_ftl.a 00:05:46.059 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:46.320 SO libspdk_bdev_ftl.so.6.0 00:05:46.320 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:46.320 SYMLINK libspdk_bdev_ftl.so 00:05:46.320 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:46.320 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:46.320 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:46.320 LIB libspdk_bdev_raid.a 
00:05:46.320 SO libspdk_bdev_raid.so.6.0 00:05:46.579 LIB libspdk_bdev_iscsi.a 00:05:46.579 SO libspdk_bdev_iscsi.so.6.0 00:05:46.579 SYMLINK libspdk_bdev_raid.so 00:05:46.579 SYMLINK libspdk_bdev_iscsi.so 00:05:46.838 LIB libspdk_bdev_virtio.a 00:05:46.838 SO libspdk_bdev_virtio.so.6.0 00:05:46.838 SYMLINK libspdk_bdev_virtio.so 00:05:47.096 LIB libspdk_bdev_nvme.a 00:05:47.096 SO libspdk_bdev_nvme.so.7.0 00:05:47.354 SYMLINK libspdk_bdev_nvme.so 00:05:47.921 CC module/event/subsystems/keyring/keyring.o 00:05:47.921 CC module/event/subsystems/sock/sock.o 00:05:47.921 CC module/event/subsystems/vmd/vmd.o 00:05:47.921 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:47.921 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:47.921 CC module/event/subsystems/scheduler/scheduler.o 00:05:47.921 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:47.921 CC module/event/subsystems/iobuf/iobuf.o 00:05:47.921 LIB libspdk_event_keyring.a 00:05:47.921 LIB libspdk_event_vhost_blk.a 00:05:47.921 LIB libspdk_event_sock.a 00:05:47.921 LIB libspdk_event_vmd.a 00:05:47.921 SO libspdk_event_keyring.so.1.0 00:05:47.921 LIB libspdk_event_scheduler.a 00:05:47.921 LIB libspdk_event_iobuf.a 00:05:47.921 SO libspdk_event_vhost_blk.so.3.0 00:05:47.921 SO libspdk_event_sock.so.5.0 00:05:47.921 SO libspdk_event_vmd.so.6.0 00:05:47.921 SO libspdk_event_scheduler.so.4.0 00:05:48.179 SYMLINK libspdk_event_keyring.so 00:05:48.179 SO libspdk_event_iobuf.so.3.0 00:05:48.179 SYMLINK libspdk_event_vhost_blk.so 00:05:48.179 SYMLINK libspdk_event_sock.so 00:05:48.179 SYMLINK libspdk_event_scheduler.so 00:05:48.179 SYMLINK libspdk_event_vmd.so 00:05:48.179 SYMLINK libspdk_event_iobuf.so 00:05:48.437 CC module/event/subsystems/accel/accel.o 00:05:48.695 LIB libspdk_event_accel.a 00:05:48.695 SO libspdk_event_accel.so.6.0 00:05:48.695 SYMLINK libspdk_event_accel.so 00:05:48.953 CC module/event/subsystems/bdev/bdev.o 00:05:49.211 LIB libspdk_event_bdev.a 00:05:49.211 SO libspdk_event_bdev.so.6.0 00:05:49.211 SYMLINK libspdk_event_bdev.so 00:05:49.468 CC module/event/subsystems/scsi/scsi.o 00:05:49.468 CC module/event/subsystems/ublk/ublk.o 00:05:49.468 CC module/event/subsystems/nbd/nbd.o 00:05:49.468 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:49.468 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:49.726 LIB libspdk_event_ublk.a 00:05:49.726 LIB libspdk_event_nbd.a 00:05:49.726 LIB libspdk_event_scsi.a 00:05:49.726 SO libspdk_event_nbd.so.6.0 00:05:49.726 SO libspdk_event_ublk.so.3.0 00:05:49.726 SO libspdk_event_scsi.so.6.0 00:05:49.726 SYMLINK libspdk_event_ublk.so 00:05:49.726 SYMLINK libspdk_event_nbd.so 00:05:49.726 SYMLINK libspdk_event_scsi.so 00:05:49.726 LIB libspdk_event_nvmf.a 00:05:49.726 SO libspdk_event_nvmf.so.6.0 00:05:49.984 SYMLINK libspdk_event_nvmf.so 00:05:49.984 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:49.984 CC module/event/subsystems/iscsi/iscsi.o 00:05:50.242 LIB libspdk_event_vhost_scsi.a 00:05:50.242 SO libspdk_event_vhost_scsi.so.3.0 00:05:50.242 LIB libspdk_event_iscsi.a 00:05:50.242 SO libspdk_event_iscsi.so.6.0 00:05:50.242 SYMLINK libspdk_event_vhost_scsi.so 00:05:50.499 SYMLINK libspdk_event_iscsi.so 00:05:50.499 SO libspdk.so.6.0 00:05:50.499 SYMLINK libspdk.so 00:05:50.757 CC app/spdk_lspci/spdk_lspci.o 00:05:50.757 CC app/spdk_nvme_identify/identify.o 00:05:50.757 CC app/trace_record/trace_record.o 00:05:50.757 CXX app/trace/trace.o 00:05:50.757 CC app/spdk_nvme_perf/perf.o 00:05:50.757 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.757 CC app/nvmf_tgt/nvmf_main.o 00:05:51.029 CC 
test/thread/poller_perf/poller_perf.o 00:05:51.029 CC examples/util/zipf/zipf.o 00:05:51.029 CC app/spdk_tgt/spdk_tgt.o 00:05:51.029 LINK spdk_lspci 00:05:51.029 LINK nvmf_tgt 00:05:51.029 LINK poller_perf 00:05:51.029 LINK iscsi_tgt 00:05:51.029 LINK spdk_trace_record 00:05:51.029 LINK zipf 00:05:51.286 LINK spdk_tgt 00:05:51.286 CC app/spdk_nvme_discover/discovery_aer.o 00:05:51.286 LINK spdk_trace 00:05:51.286 TEST_HEADER include/spdk/accel.h 00:05:51.286 TEST_HEADER include/spdk/accel_module.h 00:05:51.286 TEST_HEADER include/spdk/assert.h 00:05:51.286 TEST_HEADER include/spdk/barrier.h 00:05:51.286 TEST_HEADER include/spdk/base64.h 00:05:51.286 TEST_HEADER include/spdk/bdev.h 00:05:51.286 TEST_HEADER include/spdk/bdev_module.h 00:05:51.286 TEST_HEADER include/spdk/bdev_zone.h 00:05:51.286 TEST_HEADER include/spdk/bit_array.h 00:05:51.286 TEST_HEADER include/spdk/bit_pool.h 00:05:51.286 TEST_HEADER include/spdk/blob_bdev.h 00:05:51.286 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:51.544 TEST_HEADER include/spdk/blobfs.h 00:05:51.544 TEST_HEADER include/spdk/blob.h 00:05:51.544 TEST_HEADER include/spdk/conf.h 00:05:51.544 TEST_HEADER include/spdk/config.h 00:05:51.544 TEST_HEADER include/spdk/cpuset.h 00:05:51.544 TEST_HEADER include/spdk/crc16.h 00:05:51.544 CC app/spdk_top/spdk_top.o 00:05:51.544 TEST_HEADER include/spdk/crc32.h 00:05:51.544 TEST_HEADER include/spdk/crc64.h 00:05:51.544 TEST_HEADER include/spdk/dif.h 00:05:51.544 TEST_HEADER include/spdk/dma.h 00:05:51.544 TEST_HEADER include/spdk/endian.h 00:05:51.544 TEST_HEADER include/spdk/env_dpdk.h 00:05:51.544 TEST_HEADER include/spdk/env.h 00:05:51.544 TEST_HEADER include/spdk/event.h 00:05:51.544 TEST_HEADER include/spdk/fd_group.h 00:05:51.544 TEST_HEADER include/spdk/fd.h 00:05:51.544 TEST_HEADER include/spdk/file.h 00:05:51.544 TEST_HEADER include/spdk/ftl.h 00:05:51.544 TEST_HEADER include/spdk/gpt_spec.h 00:05:51.545 TEST_HEADER include/spdk/hexlify.h 00:05:51.545 CC test/dma/test_dma/test_dma.o 00:05:51.545 TEST_HEADER include/spdk/histogram_data.h 00:05:51.545 TEST_HEADER include/spdk/idxd.h 00:05:51.545 TEST_HEADER include/spdk/idxd_spec.h 00:05:51.545 TEST_HEADER include/spdk/init.h 00:05:51.545 TEST_HEADER include/spdk/ioat.h 00:05:51.545 TEST_HEADER include/spdk/ioat_spec.h 00:05:51.545 TEST_HEADER include/spdk/iscsi_spec.h 00:05:51.545 TEST_HEADER include/spdk/json.h 00:05:51.545 TEST_HEADER include/spdk/jsonrpc.h 00:05:51.545 CC examples/ioat/perf/perf.o 00:05:51.545 TEST_HEADER include/spdk/keyring.h 00:05:51.545 TEST_HEADER include/spdk/keyring_module.h 00:05:51.545 TEST_HEADER include/spdk/likely.h 00:05:51.545 TEST_HEADER include/spdk/log.h 00:05:51.545 TEST_HEADER include/spdk/lvol.h 00:05:51.545 TEST_HEADER include/spdk/memory.h 00:05:51.545 LINK spdk_nvme_discover 00:05:51.545 TEST_HEADER include/spdk/mmio.h 00:05:51.545 TEST_HEADER include/spdk/nbd.h 00:05:51.545 TEST_HEADER include/spdk/notify.h 00:05:51.545 TEST_HEADER include/spdk/nvme.h 00:05:51.545 TEST_HEADER include/spdk/nvme_intel.h 00:05:51.545 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:51.545 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:51.545 TEST_HEADER include/spdk/nvme_spec.h 00:05:51.545 TEST_HEADER include/spdk/nvme_zns.h 00:05:51.545 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:51.545 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:51.545 TEST_HEADER include/spdk/nvmf.h 00:05:51.545 TEST_HEADER include/spdk/nvmf_spec.h 00:05:51.545 TEST_HEADER include/spdk/nvmf_transport.h 00:05:51.545 TEST_HEADER include/spdk/opal.h 00:05:51.545 
TEST_HEADER include/spdk/opal_spec.h 00:05:51.545 CC test/app/bdev_svc/bdev_svc.o 00:05:51.545 TEST_HEADER include/spdk/pci_ids.h 00:05:51.545 TEST_HEADER include/spdk/pipe.h 00:05:51.545 TEST_HEADER include/spdk/queue.h 00:05:51.545 TEST_HEADER include/spdk/reduce.h 00:05:51.545 TEST_HEADER include/spdk/rpc.h 00:05:51.545 TEST_HEADER include/spdk/scheduler.h 00:05:51.545 TEST_HEADER include/spdk/scsi.h 00:05:51.545 TEST_HEADER include/spdk/scsi_spec.h 00:05:51.545 TEST_HEADER include/spdk/sock.h 00:05:51.545 TEST_HEADER include/spdk/stdinc.h 00:05:51.545 TEST_HEADER include/spdk/string.h 00:05:51.545 TEST_HEADER include/spdk/thread.h 00:05:51.545 TEST_HEADER include/spdk/trace.h 00:05:51.545 TEST_HEADER include/spdk/trace_parser.h 00:05:51.545 TEST_HEADER include/spdk/tree.h 00:05:51.545 TEST_HEADER include/spdk/ublk.h 00:05:51.545 TEST_HEADER include/spdk/util.h 00:05:51.545 TEST_HEADER include/spdk/uuid.h 00:05:51.545 TEST_HEADER include/spdk/version.h 00:05:51.545 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:51.545 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:51.545 TEST_HEADER include/spdk/vhost.h 00:05:51.545 TEST_HEADER include/spdk/vmd.h 00:05:51.545 TEST_HEADER include/spdk/xor.h 00:05:51.545 TEST_HEADER include/spdk/zipf.h 00:05:51.545 CXX test/cpp_headers/accel.o 00:05:51.545 LINK spdk_nvme_identify 00:05:51.545 CC app/spdk_dd/spdk_dd.o 00:05:51.802 LINK spdk_nvme_perf 00:05:51.802 CC test/env/mem_callbacks/mem_callbacks.o 00:05:51.802 LINK ioat_perf 00:05:51.802 LINK bdev_svc 00:05:51.802 CXX test/cpp_headers/accel_module.o 00:05:51.802 CXX test/cpp_headers/assert.o 00:05:51.802 CXX test/cpp_headers/barrier.o 00:05:51.802 CC examples/vmd/lsvmd/lsvmd.o 00:05:51.802 LINK test_dma 00:05:52.060 CC examples/ioat/verify/verify.o 00:05:52.060 CXX test/cpp_headers/base64.o 00:05:52.060 LINK lsvmd 00:05:52.060 CC test/app/histogram_perf/histogram_perf.o 00:05:52.060 LINK spdk_dd 00:05:52.060 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:52.060 CC examples/idxd/perf/perf.o 00:05:52.060 CXX test/cpp_headers/bdev.o 00:05:52.060 LINK verify 00:05:52.318 LINK histogram_perf 00:05:52.318 CC test/event/event_perf/event_perf.o 00:05:52.318 LINK spdk_top 00:05:52.318 CC examples/vmd/led/led.o 00:05:52.318 CXX test/cpp_headers/bdev_module.o 00:05:52.318 CXX test/cpp_headers/bdev_zone.o 00:05:52.318 LINK mem_callbacks 00:05:52.318 CC test/event/reactor/reactor.o 00:05:52.575 LINK event_perf 00:05:52.575 LINK led 00:05:52.575 LINK idxd_perf 00:05:52.575 LINK nvme_fuzz 00:05:52.575 CC test/nvme/aer/aer.o 00:05:52.575 LINK reactor 00:05:52.575 CC test/env/vtophys/vtophys.o 00:05:52.575 CXX test/cpp_headers/bit_array.o 00:05:52.575 CC app/fio/nvme/fio_plugin.o 00:05:52.833 CC test/event/reactor_perf/reactor_perf.o 00:05:52.833 CC app/fio/bdev/fio_plugin.o 00:05:52.833 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:52.833 LINK vtophys 00:05:52.833 CXX test/cpp_headers/bit_pool.o 00:05:52.833 CC test/event/app_repeat/app_repeat.o 00:05:52.833 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:52.833 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:52.833 LINK aer 00:05:52.833 LINK reactor_perf 00:05:53.094 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:53.094 CXX test/cpp_headers/blob_bdev.o 00:05:53.094 LINK app_repeat 00:05:53.094 LINK interrupt_tgt 00:05:53.094 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:53.094 CC test/env/memory/memory_ut.o 00:05:53.094 CC test/nvme/reset/reset.o 00:05:53.352 LINK env_dpdk_post_init 00:05:53.352 CXX test/cpp_headers/blobfs_bdev.o 00:05:53.352 LINK 
spdk_bdev 00:05:53.352 LINK spdk_nvme 00:05:53.352 CC test/event/scheduler/scheduler.o 00:05:53.352 LINK vhost_fuzz 00:05:53.352 CXX test/cpp_headers/blobfs.o 00:05:53.352 CC examples/thread/thread/thread_ex.o 00:05:53.352 LINK reset 00:05:53.352 CC test/nvme/sgl/sgl.o 00:05:53.611 CC test/env/pci/pci_ut.o 00:05:53.611 CC app/vhost/vhost.o 00:05:53.611 LINK scheduler 00:05:53.611 CC test/app/jsoncat/jsoncat.o 00:05:53.611 CXX test/cpp_headers/blob.o 00:05:53.870 CC test/nvme/e2edp/nvme_dp.o 00:05:53.870 LINK thread 00:05:53.870 LINK sgl 00:05:53.870 LINK jsoncat 00:05:53.870 LINK vhost 00:05:53.870 CXX test/cpp_headers/conf.o 00:05:53.870 LINK pci_ut 00:05:53.870 CC test/nvme/overhead/overhead.o 00:05:54.128 CXX test/cpp_headers/config.o 00:05:54.128 LINK nvme_dp 00:05:54.128 CXX test/cpp_headers/cpuset.o 00:05:54.128 CC test/app/stub/stub.o 00:05:54.128 CC test/nvme/err_injection/err_injection.o 00:05:54.128 CC examples/sock/hello_world/hello_sock.o 00:05:54.128 CXX test/cpp_headers/crc16.o 00:05:54.128 CC examples/accel/perf/accel_perf.o 00:05:54.128 LINK overhead 00:05:54.387 LINK err_injection 00:05:54.387 LINK stub 00:05:54.387 CC test/nvme/startup/startup.o 00:05:54.387 LINK memory_ut 00:05:54.387 CXX test/cpp_headers/crc32.o 00:05:54.387 LINK hello_sock 00:05:54.387 CC examples/blob/hello_world/hello_blob.o 00:05:54.387 LINK iscsi_fuzz 00:05:54.387 CC test/nvme/reserve/reserve.o 00:05:54.646 LINK startup 00:05:54.646 CC test/nvme/simple_copy/simple_copy.o 00:05:54.646 CC test/nvme/connect_stress/connect_stress.o 00:05:54.646 CXX test/cpp_headers/crc64.o 00:05:54.646 CXX test/cpp_headers/dif.o 00:05:54.646 CXX test/cpp_headers/dma.o 00:05:54.646 LINK accel_perf 00:05:54.646 LINK hello_blob 00:05:54.646 LINK reserve 00:05:54.904 LINK connect_stress 00:05:54.904 CXX test/cpp_headers/endian.o 00:05:54.904 CC test/nvme/boot_partition/boot_partition.o 00:05:54.904 LINK simple_copy 00:05:54.904 CC test/nvme/fused_ordering/fused_ordering.o 00:05:54.904 CC test/nvme/compliance/nvme_compliance.o 00:05:54.904 CC examples/nvme/hello_world/hello_world.o 00:05:54.904 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:54.904 CXX test/cpp_headers/env_dpdk.o 00:05:54.904 LINK boot_partition 00:05:54.904 CC test/nvme/fdp/fdp.o 00:05:55.162 CC examples/blob/cli/blobcli.o 00:05:55.162 CC examples/nvme/reconnect/reconnect.o 00:05:55.162 LINK fused_ordering 00:05:55.162 CXX test/cpp_headers/env.o 00:05:55.162 LINK hello_world 00:05:55.162 LINK doorbell_aers 00:05:55.162 LINK nvme_compliance 00:05:55.162 CC test/nvme/cuse/cuse.o 00:05:55.419 CC examples/bdev/hello_world/hello_bdev.o 00:05:55.419 CXX test/cpp_headers/event.o 00:05:55.419 LINK fdp 00:05:55.419 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:55.419 CC examples/nvme/arbitration/arbitration.o 00:05:55.419 LINK reconnect 00:05:55.419 CC examples/nvme/hotplug/hotplug.o 00:05:55.419 CXX test/cpp_headers/fd_group.o 00:05:55.419 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:55.419 LINK blobcli 00:05:55.419 CXX test/cpp_headers/fd.o 00:05:55.419 LINK hello_bdev 00:05:55.676 CC examples/nvme/abort/abort.o 00:05:55.676 CXX test/cpp_headers/file.o 00:05:55.676 LINK hotplug 00:05:55.676 LINK cmb_copy 00:05:55.676 LINK arbitration 00:05:55.934 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:55.934 CC test/rpc_client/rpc_client_test.o 00:05:55.934 CXX test/cpp_headers/ftl.o 00:05:55.934 CXX test/cpp_headers/gpt_spec.o 00:05:55.934 CC examples/bdev/bdevperf/bdevperf.o 00:05:55.934 LINK nvme_manage 00:05:55.934 CXX test/cpp_headers/hexlify.o 
00:05:55.934 LINK pmr_persistence 00:05:56.192 CXX test/cpp_headers/histogram_data.o 00:05:56.192 CXX test/cpp_headers/idxd.o 00:05:56.192 LINK abort 00:05:56.192 LINK rpc_client_test 00:05:56.192 CC test/accel/dif/dif.o 00:05:56.192 CXX test/cpp_headers/idxd_spec.o 00:05:56.192 CXX test/cpp_headers/init.o 00:05:56.192 CXX test/cpp_headers/ioat.o 00:05:56.192 CXX test/cpp_headers/ioat_spec.o 00:05:56.192 CXX test/cpp_headers/iscsi_spec.o 00:05:56.192 CXX test/cpp_headers/json.o 00:05:56.449 CC test/blobfs/mkfs/mkfs.o 00:05:56.449 CXX test/cpp_headers/jsonrpc.o 00:05:56.449 CXX test/cpp_headers/keyring.o 00:05:56.449 CXX test/cpp_headers/keyring_module.o 00:05:56.449 CXX test/cpp_headers/likely.o 00:05:56.449 CXX test/cpp_headers/log.o 00:05:56.449 CXX test/cpp_headers/lvol.o 00:05:56.449 LINK mkfs 00:05:56.449 CC test/lvol/esnap/esnap.o 00:05:56.449 LINK dif 00:05:56.706 CXX test/cpp_headers/memory.o 00:05:56.706 CXX test/cpp_headers/mmio.o 00:05:56.706 CXX test/cpp_headers/nbd.o 00:05:56.706 LINK bdevperf 00:05:56.706 CXX test/cpp_headers/notify.o 00:05:56.706 CXX test/cpp_headers/nvme.o 00:05:56.706 CXX test/cpp_headers/nvme_intel.o 00:05:56.706 LINK cuse 00:05:56.706 CXX test/cpp_headers/nvme_ocssd.o 00:05:56.706 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:56.706 CXX test/cpp_headers/nvme_spec.o 00:05:56.706 CXX test/cpp_headers/nvme_zns.o 00:05:56.992 CXX test/cpp_headers/nvmf_cmd.o 00:05:56.992 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:56.992 CXX test/cpp_headers/nvmf.o 00:05:56.992 CXX test/cpp_headers/nvmf_spec.o 00:05:56.992 CXX test/cpp_headers/nvmf_transport.o 00:05:56.992 CXX test/cpp_headers/opal.o 00:05:56.992 CXX test/cpp_headers/opal_spec.o 00:05:56.992 CXX test/cpp_headers/pci_ids.o 00:05:56.992 CC test/bdev/bdevio/bdevio.o 00:05:56.992 CC examples/nvmf/nvmf/nvmf.o 00:05:56.992 CXX test/cpp_headers/pipe.o 00:05:56.992 CXX test/cpp_headers/queue.o 00:05:57.250 CXX test/cpp_headers/reduce.o 00:05:57.250 CXX test/cpp_headers/rpc.o 00:05:57.250 CXX test/cpp_headers/scheduler.o 00:05:57.250 CXX test/cpp_headers/scsi.o 00:05:57.250 CXX test/cpp_headers/scsi_spec.o 00:05:57.250 CXX test/cpp_headers/sock.o 00:05:57.250 CXX test/cpp_headers/stdinc.o 00:05:57.250 CXX test/cpp_headers/string.o 00:05:57.250 CXX test/cpp_headers/thread.o 00:05:57.509 CXX test/cpp_headers/trace.o 00:05:57.509 LINK nvmf 00:05:57.509 CXX test/cpp_headers/trace_parser.o 00:05:57.509 CXX test/cpp_headers/tree.o 00:05:57.509 CXX test/cpp_headers/ublk.o 00:05:57.509 CXX test/cpp_headers/util.o 00:05:57.509 CXX test/cpp_headers/uuid.o 00:05:57.509 CXX test/cpp_headers/version.o 00:05:57.509 LINK bdevio 00:05:57.509 CXX test/cpp_headers/vfio_user_pci.o 00:05:57.509 CXX test/cpp_headers/vfio_user_spec.o 00:05:57.509 CXX test/cpp_headers/vhost.o 00:05:57.509 CXX test/cpp_headers/vmd.o 00:05:57.509 CXX test/cpp_headers/xor.o 00:05:57.509 CXX test/cpp_headers/zipf.o 00:06:01.691 LINK esnap 00:06:01.949 00:06:01.949 real 1m7.729s 00:06:01.949 user 6m35.454s 00:06:01.949 sys 1m37.474s 00:06:01.949 23:06:24 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:06:01.949 23:06:24 make -- common/autotest_common.sh@10 -- $ set +x 00:06:01.949 ************************************ 00:06:01.949 END TEST make 00:06:01.949 ************************************ 00:06:02.206 23:06:24 -- common/autotest_common.sh@1142 -- $ return 0 00:06:02.206 23:06:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:02.206 23:06:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:02.206 23:06:24 -- pm/common@40 -- $ 
local monitor pid pids signal=TERM 00:06:02.206 23:06:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.206 23:06:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:02.206 23:06:24 -- pm/common@44 -- $ pid=5303 00:06:02.206 23:06:24 -- pm/common@50 -- $ kill -TERM 5303 00:06:02.206 23:06:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.206 23:06:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:02.206 23:06:24 -- pm/common@44 -- $ pid=5305 00:06:02.206 23:06:24 -- pm/common@50 -- $ kill -TERM 5305 00:06:02.206 23:06:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.206 23:06:24 -- nvmf/common.sh@7 -- # uname -s 00:06:02.206 23:06:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.206 23:06:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.206 23:06:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.206 23:06:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.206 23:06:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.206 23:06:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.206 23:06:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.206 23:06:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.206 23:06:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.206 23:06:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.207 23:06:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:06:02.207 23:06:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:06:02.207 23:06:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.207 23:06:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.207 23:06:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:02.207 23:06:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.207 23:06:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.207 23:06:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.207 23:06:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.207 23:06:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.207 23:06:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.207 23:06:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.207 23:06:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.207 23:06:24 -- paths/export.sh@5 -- # export PATH 00:06:02.207 23:06:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.207 23:06:24 -- nvmf/common.sh@47 -- # : 0 00:06:02.207 23:06:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.207 23:06:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.207 23:06:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.207 23:06:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.207 23:06:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.207 23:06:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.207 23:06:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.207 23:06:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.207 23:06:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:02.207 23:06:24 -- spdk/autotest.sh@32 -- # uname -s 00:06:02.207 23:06:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:02.207 23:06:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:02.207 23:06:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.207 23:06:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:02.207 23:06:24 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:02.207 23:06:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:02.207 23:06:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:02.207 23:06:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:02.207 23:06:24 -- spdk/autotest.sh@48 -- # udevadm_pid=52960 00:06:02.207 23:06:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:02.207 23:06:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:02.207 23:06:24 -- pm/common@17 -- # local monitor 00:06:02.207 23:06:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.207 23:06:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.207 23:06:24 -- pm/common@25 -- # sleep 1 00:06:02.207 23:06:24 -- pm/common@21 -- # date +%s 00:06:02.207 23:06:24 -- pm/common@21 -- # date +%s 00:06:02.207 23:06:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721862384 00:06:02.207 23:06:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721862384 00:06:02.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721862384_collect-vmstat.pm.log 00:06:02.207 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721862384_collect-cpu-load.pm.log 00:06:03.140 23:06:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.140 23:06:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.140 23:06:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.140 23:06:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.140 23:06:25 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.140 23:06:25 -- common/autotest_common.sh@746 -- # xtrace_disable 00:06:03.140 23:06:25 -- common/autotest_common.sh@10 -- # set +x 00:06:03.399 23:06:25 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:03.399 23:06:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:03.399 23:06:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:03.399 23:06:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:03.399 23:06:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:03.399 23:06:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:03.399 23:06:25 -- common/autotest_common.sh@1455 -- # uname 00:06:03.399 23:06:25 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:03.399 23:06:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:03.399 23:06:25 -- common/autotest_common.sh@1475 -- # uname 00:06:03.399 23:06:25 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:03.399 23:06:25 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:06:03.399 23:06:25 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:06:03.399 23:06:25 -- spdk/autotest.sh@72 -- # hash lcov 00:06:03.399 23:06:25 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:03.399 23:06:25 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:06:03.399 --rc lcov_branch_coverage=1 00:06:03.399 --rc lcov_function_coverage=1 00:06:03.399 --rc genhtml_branch_coverage=1 00:06:03.399 --rc genhtml_function_coverage=1 00:06:03.399 --rc genhtml_legend=1 00:06:03.399 --rc geninfo_all_blocks=1 00:06:03.399 ' 00:06:03.399 23:06:25 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:06:03.399 --rc lcov_branch_coverage=1 00:06:03.399 --rc lcov_function_coverage=1 00:06:03.399 --rc genhtml_branch_coverage=1 00:06:03.399 --rc genhtml_function_coverage=1 00:06:03.399 --rc genhtml_legend=1 00:06:03.399 --rc geninfo_all_blocks=1 00:06:03.399 ' 00:06:03.399 23:06:25 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:06:03.399 --rc lcov_branch_coverage=1 00:06:03.399 --rc lcov_function_coverage=1 00:06:03.399 --rc genhtml_branch_coverage=1 00:06:03.399 --rc genhtml_function_coverage=1 00:06:03.399 --rc genhtml_legend=1 00:06:03.399 --rc geninfo_all_blocks=1 00:06:03.399 --no-external' 00:06:03.399 23:06:25 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:06:03.399 --rc lcov_branch_coverage=1 00:06:03.399 --rc lcov_function_coverage=1 00:06:03.399 --rc genhtml_branch_coverage=1 00:06:03.399 --rc genhtml_function_coverage=1 00:06:03.399 --rc genhtml_legend=1 00:06:03.399 --rc geninfo_all_blocks=1 00:06:03.399 --no-external' 00:06:03.399 23:06:25 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:06:03.399 lcov: LCOV version 1.14 00:06:03.399 23:06:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:18.266 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:18.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:30.461 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:30.461 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:30.720 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:30.720 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:30.720 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:30.721 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:30.721 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:30.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:30.979 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:30.979 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:30.979 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:30.980 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:30.980 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:31.236 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:31.236 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:31.236 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:31.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:31.236 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:31.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:31.236 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:31.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:31.236 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:31.236 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:34.517 23:06:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:34.517 23:06:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.517 23:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:34.517 23:06:56 -- spdk/autotest.sh@91 -- # rm -f 00:06:34.517 23:06:56 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:35.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:35.451 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:35.451 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:35.451 23:06:57 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:35.451 23:06:57 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:35.451 23:06:57 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:35.451 23:06:57 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:35.451 23:06:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:35.451 23:06:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:35.451 23:06:57 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:35.451 23:06:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:35.451 23:06:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:35.451 23:06:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:35.451 23:06:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:35.451 23:06:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:35.451 23:06:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:35.451 23:06:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:35.451 23:06:57 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:35.451 23:06:57 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:35.451 23:06:57 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:35.451 23:06:57 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:35.451 23:06:57 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:35.451 23:06:57 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:35.451 23:06:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:35.451 23:06:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:35.451 23:06:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:06:35.451 23:06:57 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:35.451 23:06:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:35.451 No valid GPT data, bailing 00:06:35.451 23:06:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:35.451 23:06:57 -- scripts/common.sh@391 -- # pt= 00:06:35.451 23:06:57 -- scripts/common.sh@392 -- # return 1 00:06:35.451 23:06:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:35.451 1+0 records in 00:06:35.451 1+0 records out 00:06:35.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005067 s, 207 MB/s 00:06:35.451 23:06:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:35.451 23:06:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:35.451 23:06:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:06:35.451 23:06:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:06:35.451 23:06:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:35.451 No valid GPT data, bailing 00:06:35.451 23:06:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:35.451 23:06:57 -- scripts/common.sh@391 -- # pt= 00:06:35.451 23:06:57 -- scripts/common.sh@392 -- # return 1 00:06:35.451 23:06:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:35.451 1+0 records in 00:06:35.451 1+0 records out 00:06:35.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457076 s, 229 MB/s 00:06:35.451 23:06:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:35.451 23:06:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:35.451 23:06:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:06:35.452 23:06:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:06:35.452 23:06:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:35.452 No valid GPT data, bailing 00:06:35.710 23:06:57 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:35.710 23:06:57 -- scripts/common.sh@391 -- # pt= 00:06:35.710 23:06:57 -- scripts/common.sh@392 -- # return 1 00:06:35.710 23:06:57 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:35.710 1+0 records in 00:06:35.710 1+0 records out 00:06:35.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412074 s, 254 MB/s 00:06:35.710 23:06:57 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:35.710 23:06:57 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:35.710 23:06:57 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:06:35.710 23:06:57 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:06:35.710 23:06:57 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:35.710 No valid GPT data, bailing 00:06:35.710 23:06:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:35.710 23:06:58 -- scripts/common.sh@391 -- # pt= 00:06:35.710 23:06:58 -- scripts/common.sh@392 -- # return 1 00:06:35.710 23:06:58 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:06:35.710 1+0 records in 00:06:35.710 1+0 records out 00:06:35.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495081 s, 212 MB/s 00:06:35.710 23:06:58 -- spdk/autotest.sh@118 -- # sync 00:06:35.710 23:06:58 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:35.710 23:06:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:35.710 23:06:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:37.683 23:06:59 -- spdk/autotest.sh@124 -- # uname -s 00:06:37.683 23:06:59 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:37.683 23:06:59 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:37.683 23:06:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.683 23:06:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.683 23:06:59 -- common/autotest_common.sh@10 -- # set +x 00:06:37.683 ************************************ 00:06:37.683 START TEST setup.sh 00:06:37.683 ************************************ 00:06:37.683 23:06:59 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:06:37.683 * Looking for test storage... 00:06:37.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:37.683 23:06:59 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:06:37.683 23:06:59 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:37.683 23:06:59 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:37.683 23:06:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.683 23:06:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.683 23:06:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:37.683 ************************************ 00:06:37.683 START TEST acl 00:06:37.683 ************************************ 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:06:37.683 * Looking for test storage... 
00:06:37.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:37.683 23:06:59 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:06:37.683 23:06:59 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:06:37.683 23:06:59 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:37.683 23:06:59 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:38.247 23:07:00 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:06:38.247 23:07:00 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:06:38.247 23:07:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:38.247 23:07:00 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:06:38.248 23:07:00 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:06:38.248 23:07:00 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:39.182 23:07:01 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 Hugepages 00:06:39.182 node hugesize free / total 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 00:06:39.182 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:06:39.182 23:07:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:06:39.182 23:07:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.182 23:07:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.182 23:07:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:39.182 ************************************ 00:06:39.182 START TEST denied 00:06:39.182 ************************************ 00:06:39.182 23:07:01 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:06:39.182 23:07:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:06:39.182 23:07:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:06:39.183 23:07:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:06:39.183 23:07:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:06:39.183 23:07:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:40.117 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:40.117 23:07:02 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:40.682 00:06:40.682 real 0m1.441s 00:06:40.682 user 0m0.547s 00:06:40.682 sys 0m0.841s 00:06:40.682 23:07:03 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.682 ************************************ 00:06:40.682 END TEST denied 00:06:40.682 ************************************ 00:06:40.682 23:07:03 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:06:40.682 23:07:03 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:40.682 23:07:03 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:40.682 23:07:03 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:40.682 23:07:03 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.682 23:07:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:40.682 ************************************ 00:06:40.682 START TEST allowed 00:06:40.682 ************************************ 00:06:40.682 23:07:03 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:06:40.682 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:06:40.682 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:06:40.682 23:07:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:06:40.682 23:07:03 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:40.682 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:06:41.615 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:41.615 23:07:03 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.181 00:06:42.181 real 0m1.453s 00:06:42.181 user 0m0.656s 00:06:42.181 sys 0m0.786s 00:06:42.181 23:07:04 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:42.181 23:07:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:42.181 ************************************ 00:06:42.181 END TEST allowed 00:06:42.181 ************************************ 00:06:42.181 23:07:04 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:06:42.181 ************************************ 00:06:42.181 END TEST acl 00:06:42.181 ************************************ 00:06:42.181 00:06:42.181 real 0m4.697s 00:06:42.181 user 0m2.032s 00:06:42.181 sys 0m2.607s 00:06:42.181 23:07:04 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.181 23:07:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:42.181 23:07:04 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:42.181 23:07:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:42.181 23:07:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.181 23:07:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.181 23:07:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:42.181 ************************************ 00:06:42.181 START TEST hugepages 00:06:42.181 ************************************ 00:06:42.181 23:07:04 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:06:42.443 * Looking for test storage... 00:06:42.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6032688 kB' 'MemAvailable: 7412212 kB' 'Buffers: 2436 kB' 'Cached: 1593752 kB' 'SwapCached: 0 kB' 'Active: 435092 kB' 'Inactive: 1264840 kB' 'Active(anon): 114232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264840 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 105352 kB' 'Mapped: 48692 kB' 'Shmem: 
10488 kB' 'KReclaimable: 61524 kB' 'Slab: 133032 kB' 'SReclaimable: 61524 kB' 'SUnreclaim: 71508 kB' 'KernelStack: 6316 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 334120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.443 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:42.444 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:42.445 23:07:04 
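# [Editor's note] The xtrace above shows setup/common.sh's get_meminfo walking /proc/meminfo field by field with IFS=': ' until it reaches Hugepagesize, then echoing 2048 and returning 0. A minimal standalone sketch of that lookup pattern follows (hypothetical helper name; a reduction of the idea, not the SPDK implementation itself):
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every key except the requested one
        echo "$val"                        # numeric value only; the unit column lands in "_"
        return 0
    done < /proc/meminfo
    return 1                               # requested key not present
}
# Example: get_meminfo_value Hugepagesize  -> 2048 on this runner, matching the trace above.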
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:42.445 23:07:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:42.445 23:07:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.445 23:07:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.445 23:07:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:42.445 ************************************ 00:06:42.445 START TEST default_setup 00:06:42.445 ************************************ 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.445 23:07:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.027 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.290 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133748 kB' 'MemAvailable: 9513108 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452164 kB' 'Inactive: 1264856 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132636 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71472 kB' 'KernelStack: 6304 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.290 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.291 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132992 kB' 'MemAvailable: 9512352 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452120 kB' 'Inactive: 1264856 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122416 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132636 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71472 kB' 'KernelStack: 6304 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.292 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.292 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.293 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:43.294 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132992 kB' 'MemAvailable: 9512352 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452120 kB' 'Inactive: 1264856 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122392 kB' 'Mapped: 
48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132636 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71472 kB' 'KernelStack: 6288 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.295 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 
23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.296 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:06:43.297 nr_hugepages=1024 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:43.297 resv_hugepages=0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:43.297 surplus_hugepages=0 00:06:43.297 anon_hugepages=0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.297 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8132992 kB' 'MemAvailable: 9512352 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452040 kB' 'Inactive: 1264856 kB' 'Active(anon): 131180 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122348 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132636 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71472 kB' 'KernelStack: 6304 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.298 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 
23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.299 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8133244 kB' 'MemUsed: 4108732 kB' 'SwapCached: 0 kB' 'Active: 452108 kB' 'Inactive: 1264856 kB' 'Active(anon): 131248 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1596180 kB' 'Mapped: 48664 kB' 'AnonPages: 122472 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132636 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.300 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
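The block above is the xtrace of setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches HugePages_Surp and echoing its value (0 here). Stripped of the per-field trace, the pattern it executes looks roughly like the sketch below; the body is reconstructed from the setup/common.sh@17-@33 lines in this log and is an illustration, not the exact upstream source.

  # Sketch, reconstructed from the setup/common.sh@17-@33 trace above (assumption, not upstream code).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # per the @18/@23 lines, a per-node meminfo file is used only when a node id is passed in
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix used by per-node files
      while IFS=': ' read -r var val _; do      # every non-matching field is the "continue" seen in the trace
          [[ $var == "$get" ]] || continue
          echo "$val"                           # e.g. "0" for HugePages_Surp above
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

In the trace, every field before HugePages_Surp takes the continue branch, which is why the same @31/@32 pair repeats once per /proc/meminfo line before the final echo 0 / return 0.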
00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:43.301 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:43.302 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:43.302 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:43.302 node0=1024 expecting 1024 00:06:43.302 23:07:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:43.302 00:06:43.302 real 0m0.952s 00:06:43.302 user 0m0.431s 00:06:43.302 sys 0m0.450s 00:06:43.302 23:07:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.302 ************************************ 00:06:43.302 END TEST default_setup 00:06:43.302 ************************************ 00:06:43.302 23:07:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 23:07:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:43.560 23:07:05 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:43.560 23:07:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.560 23:07:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.560 23:07:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:43.560 ************************************ 00:06:43.560 START TEST per_node_1G_alloc 00:06:43.560 ************************************ 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:43.560 23:07:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:43.560 23:07:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.822 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.822 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9186136 kB' 'MemAvailable: 10565496 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452900 kB' 'Inactive: 1264856 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132720 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71556 kB' 'KernelStack: 6308 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 
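The printf just above dumps the meminfo snapshot after scripts/setup.sh ran with NRHUGE=512 and HUGENODE=0: HugePages_Total and HugePages_Free are both 512, Hugepagesize is 2048 kB, and Hugetlb is 1048576 kB, consistent with the 1048576 kB requested via get_test_nr_hugepages divided by the 2048 kB default huge page size. A quick sanity check of those numbers (values copied from the snapshot; nothing here is computed by the test itself):

  # 512 huge pages x 2048 kB per page should account for the reported Hugetlb size.
  hugepages_total=512      # HugePages_Total from the snapshot above
  hugepagesize_kb=2048     # Hugepagesize from the snapshot above
  echo $(( hugepages_total * hugepagesize_kb ))   # 1048576 kB, i.e. the 1 GiB that per_node_1G_alloc asked for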
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.823 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9186312 kB' 'MemAvailable: 10565672 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452056 kB' 'Inactive: 1264856 kB' 'Active(anon): 131196 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122304 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132712 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71548 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.824 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.825 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.826 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9186312 kB' 'MemAvailable: 10565672 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452124 kB' 'Inactive: 1264856 kB' 'Active(anon): 131264 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132708 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71544 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.827 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 
23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.828 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.828 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:43.829 nr_hugepages=512 00:06:43.829 resv_hugepages=0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:43.829 surplus_hugepages=0 00:06:43.829 anon_hugepages=0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9186988 kB' 'MemAvailable: 10566348 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452132 kB' 'Inactive: 1264856 kB' 'Active(anon): 131272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 
kB' 'Writeback: 0 kB' 'AnonPages: 122428 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132708 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71544 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:43.829 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 
23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:43.830 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9187692 kB' 'MemUsed: 3054284 kB' 'SwapCached: 0 kB' 'Active: 451928 kB' 'Inactive: 1264856 kB' 'Active(anon): 131068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1596180 kB' 'Mapped: 48664 kB' 'AnonPages: 122432 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132708 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71544 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.090 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.091 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:44.092 node0=512 expecting 512 00:06:44.092 ************************************ 00:06:44.092 END TEST per_node_1G_alloc 00:06:44.092 ************************************ 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:44.092 00:06:44.092 real 0m0.552s 00:06:44.092 user 0m0.255s 00:06:44.092 sys 0m0.302s 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.092 23:07:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:44.092 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:44.092 23:07:06 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:44.092 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.092 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.092 23:07:06 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:44.092 ************************************ 00:06:44.092 START TEST even_2G_alloc 00:06:44.092 ************************************ 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:44.092 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.353 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:44.353 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc 
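The even_2G_alloc prologue traced above turns the 2097152 kB passed to get_test_nr_hugepages into nr_hugepages=1024 and then re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A minimal sketch of that conversion, assuming the size argument is in kB and using the 2048 kB Hugepagesize reported in the surrounding meminfo dumps; the variable names are illustrative rather than the hugepages.sh internals:

  size=2097152               # kB, the argument passed to get_test_nr_hugepages above
  default_hugepages=2048     # kB per 2 MB hugepage (Hugepagesize in the meminfo dumps)
  if (( size >= default_hugepages )); then
    nr_hugepages=$(( size / default_hugepages ))
  fi
  echo "nr_hugepages=$nr_hugepages"   # prints 1024, matching nr_hugepages=1024 in the trace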
-- setup/hugepages.sh@92 -- # local surp 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8141208 kB' 'MemAvailable: 9520568 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452208 kB' 'Inactive: 1264856 kB' 'Active(anon): 131348 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122464 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132724 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71560 kB' 'KernelStack: 6276 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.353 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8140956 kB' 'MemAvailable: 9520316 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452276 kB' 'Inactive: 
1264856 kB' 'Active(anon): 131416 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122532 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132724 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71560 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.354 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
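Both the AnonHugePages lookup that finished above (anon=0, gated by the transparent_hugepage check at hugepages.sh@96) and the HugePages_Surp scan now under way follow the same pattern: snapshot the meminfo source, split each line on IFS=': ', and continue past every key until the requested one matches, then echo its value. A minimal standalone sketch of that scan; it mirrors the traced pattern but is not the common.sh get_meminfo source verbatim:

  get=HugePages_Surp
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip every other meminfo key, as in the trace
    echo "$val"                        # the surplus page count; 0 in the dump above
    break
  done < /proc/meminfo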
_ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.355 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.355 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.356 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.617 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8141208 kB' 'MemAvailable: 9520568 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452168 kB' 'Inactive: 1264856 kB' 'Active(anon): 131308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122428 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132724 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71560 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc 
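The snapshot that just printed reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, which is what an even allocation of 1024 pages should look like before the test touches them. A minimal sketch of pulling those four counters in one pass for a quick sanity check; this is an illustration of the bookkeeping, not the verify_nr_hugepages logic itself:

  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  echo "expecting 1024 pages: total=$total free=$free rsvd=$rsvd surp=$surp"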
-- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.618 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.619 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:44.620 nr_hugepages=1024 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:44.620 resv_hugepages=0 00:06:44.620 surplus_hugepages=0 00:06:44.620 anon_hugepages=0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.620 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8141208 kB' 'MemAvailable: 9520568 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452064 kB' 'Inactive: 1264856 kB' 'Active(anon): 131204 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122308 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132704 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71540 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:44.620 23:07:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[the same per-key scan repeats: every key from MemTotal through Unaccepted fails the HugePages_Total comparison at setup/common.sh@32 and falls through to "continue" before the next IFS=': ' read at setup/common.sh@31]
00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
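The get_meminfo calls traced above and below all follow the same pattern: pick /proc/meminfo (or the per-node copy under /sys/devices/system/node), strip the "Node N " prefix, and walk the file with an IFS=': ' read until the requested key matches, echoing its value. A minimal bash sketch of that pattern, reconstructed from the trace (the function name get_meminfo_sketch and the exact argument handling are illustrative, not the actual SPDK test/setup/common.sh source):

# Sketch of the lookup pattern shown in the xtrace; details of the real helper may differ.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # Per-node statistics come from sysfs when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node "$node" }               # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<<"$line"    # split "Key:   value kB" into key and value
        if [[ $var == "$get" ]]; then
            echo "$val"                          # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        fi
    done <"$mem_f"
    return 1
}

Called as get_meminfo_sketch HugePages_Surp 0, this would print 0 on the node traced here, which is the value the test then adds into nodes_test.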
00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:44.622 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8141836 kB' 'MemUsed: 4100140 kB' 'SwapCached: 0 kB' 'Active: 452188 kB' 'Inactive: 1264856 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1596180 kB' 'Mapped: 48664 kB' 'AnonPages: 122436 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132704 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[every node0 meminfo key from MemTotal through HugePages_Free fails the HugePages_Surp comparison at setup/common.sh@32 and falls through to "continue" before the next IFS=': ' read at setup/common.sh@31]
00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc --
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:44.623 node0=1024 expecting 1024 00:06:44.623 ************************************ 00:06:44.623 END TEST even_2G_alloc 00:06:44.623 ************************************ 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:44.623 00:06:44.623 real 0m0.555s 00:06:44.623 user 0m0.269s 00:06:44.623 sys 0m0.297s 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.623 23:07:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:44.623 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:44.623 23:07:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:44.623 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.623 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.623 23:07:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:44.623 ************************************ 00:06:44.623 START TEST odd_alloc 00:06:44.623 ************************************ 00:06:44.623 23:07:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:06:44.623 23:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:44.623 23:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:44.623 23:07:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:44.623 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
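At this point odd_alloc has fixed its inputs: HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes, which the trace turns into size=2098176 (kB) and nr_hugepages=1025 on the single node. A quick sanity check of that arithmetic follows; the ceiling rounding is an assumption, only the inputs and the traced result come from the log:

# 2049 MiB of huge memory with 2048 kB pages -> an intentionally odd page count.
hugemem_mb=2049
size_kb=$(( hugemem_mb * 1024 ))                               # 2098176, the argument to get_test_nr_hugepages
hugepage_kb=2048                                               # Hugepagesize reported in the meminfo dumps above
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # ceiling division -> 1025
echo "nr_hugepages=$nr_hugepages nodes_test[0]=$nr_hugepages"  # matches the traced values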
00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:44.624 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:44.882 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:44.882 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:44.882 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.145 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8145452 kB' 'MemAvailable: 9524812 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452528 kB' 'Inactive: 1264856 kB' 'Active(anon): 131668 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122808 kB' 'Mapped: 48812 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132604 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71440 kB' 'KernelStack: 6324 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 
23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.146 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.146 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 
23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8145452 kB' 'MemAvailable: 9524812 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452524 kB' 'Inactive: 1264856 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122784 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132604 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71440 kB' 'KernelStack: 6276 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.147 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
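Most of the trace in this block is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' and skipping every key that is not the one requested (HugePages_Surp here), so each call expands into a long run of 'continue' lines. A rough reconstruction of that helper, sketched from the commands visible in the trace rather than copied from the SPDK source:

get_meminfo() {                                  # usage: get_meminfo HugePages_Surp [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem var val _

    # Per-NUMA-node statistics live in sysfs; otherwise fall back to the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")             # per-node files prefix each line with "Node N "

    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue         # one 'continue' trace line per non-matching key
        echo "$val"                              # e.g. 0 for HugePages_Surp on this VM
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}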
00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.148 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 
23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8145200 kB' 'MemAvailable: 9524560 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452192 kB' 'Inactive: 1264856 kB' 'Active(anon): 131332 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132612 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71448 kB' 'KernelStack: 6304 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
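The calls around this point belong to verify_nr_hugepages, which gathers AnonHugePages, HugePages_Surp, HugePages_Rsvd and the hugepage totals and checks them against the requested count (nr_hugepages=1025 for odd_alloc). The following is only a hedged reconstruction of that flow from the trace: variable names follow the log, the exact assertions in hugepages.sh may differ, and get_meminfo is the helper sketched above.

verify_nr_hugepages() {
    local anon surp resv total

    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1025

    echo "nr_hugepages=$total"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # nr_hugepages is the global count requested earlier (1025 here); the
    # kernel-reported total should equal it plus any surplus/reserved pages.
    (( total == nr_hugepages + surp + resv ))
}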
00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.149 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.150 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:45.151 nr_hugepages=1025 00:06:45.151 resv_hugepages=0 00:06:45.151 surplus_hugepages=0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:45.151 anon_hugepages=0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8145200 kB' 'MemAvailable: 9524560 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452088 kB' 'Inactive: 1264856 kB' 'Active(anon): 131228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122380 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132608 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 351492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.151 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.152 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8145620 kB' 'MemUsed: 4096356 kB' 'SwapCached: 0 kB' 'Active: 451872 kB' 'Inactive: 1264852 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1596176 kB' 'Mapped: 48664 kB' 'AnonPages: 122244 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132608 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
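[editor's note] The setup/common.sh records being traced here (local get / local node= / mem_f=/proc/meminfo / the node0 meminfo test / mapfile -t mem / read -r var val _ / the long run of continue / echo / return 0) are one call to the suite's meminfo lookup helper scanning a meminfo file key by key until it hits the requested field (HugePages_Surp for node 0 in this pass). A minimal sketch of that lookup, reconstructed from the trace, follows; the name get_meminfo_sketch and the exact error handling are illustrative, not the script's verbatim source.

  #!/usr/bin/env bash
  # Sketch of the lookup the xtrace above is stepping through: choose the
  # system-wide or per-node meminfo file, strip the "Node N" prefix that the
  # per-node files carry, then scan line by line for the requested key.
  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo_sketch() {
      local get=$1 node=${2:-}        # e.g. get_meminfo_sketch HugePages_Surp 0
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"

      local var val _rest
      while IFS=': ' read -r var val _rest; do
          [[ $var == "$get" ]] || continue   # the repeated "continue" records above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }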
00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.153 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
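[editor's note] The full meminfo snapshot printf'd a little earlier in this trace reports 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB', which is internally consistent. A quick check, with the values copied straight from that snapshot:

  hugepages_total=1025    # HugePages_Total from the snapshot above
  hugepagesize_kb=2048    # Hugepagesize from the snapshot, in kB
  echo $(( hugepages_total * hugepagesize_kb ))   # prints 2099200, matching 'Hugetlb: 2099200 kB'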
00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
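[editor's note] The echo 0 just above is the per-node HugePages_Surp result for node0. The verification traced around setup/hugepages.sh@110-130 then folds reserved and surplus pages into the expected per-node counts and reports 'node0=1025 expecting 1025'. A hedged sketch of that bookkeeping follows; how nodes_sys is actually populated is not visible in the trace (only its resulting value, 1025), so the per-node HugePages_Total lookup below is a stand-in, and get_meminfo_sketch is the helper sketched earlier.

  # Expected per-node counts; the test configured 1025 pages on node 0 earlier.
  nodes_test=([0]=1025)
  nodes_sys=()

  # get_nodes (hugepages.sh@27-33 in the trace): record what each node reports.
  for path in /sys/devices/system/node/node[0-9]*; do
      [[ -d $path ]] || continue
      node=${path##*node}
      nodes_sys[node]=$(get_meminfo_sketch HugePages_Total "$node")
  done

  resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in the run above
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 for node0 here
      (( nodes_test[node] += surp ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # node0=1025 expecting 1025
  done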
00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:45.154 node0=1025 expecting 1025 00:06:45.154 ************************************ 00:06:45.154 END TEST odd_alloc 00:06:45.154 ************************************ 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:06:45.154 00:06:45.154 real 0m0.560s 00:06:45.154 user 0m0.283s 00:06:45.154 sys 0m0.285s 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.154 23:07:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:45.154 23:07:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:45.154 23:07:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:45.154 23:07:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.154 23:07:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.154 23:07:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:45.154 ************************************ 00:06:45.154 START TEST custom_alloc 00:06:45.154 ************************************ 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:45.154 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:45.155 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:45.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.774 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.774 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.774 23:07:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9194848 kB' 'MemAvailable: 10574208 kB' 'Buffers: 2436 kB' 'Cached: 1593744 kB' 'SwapCached: 0 kB' 'Active: 452648 kB' 'Inactive: 1264856 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122896 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132628 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71464 kB' 'KernelStack: 6292 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.774 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.774 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
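[editor's note] Backing up to the custom_alloc prologue traced at setup/hugepages.sh@167-188: the test asks for a 1048576 kB (1 GiB) pool, which at the 2048 kB hugepage size works out to 512 pages, all assigned to node 0 and exported to scripts/setup.sh as HUGENODE='nodes_hp[0]=512'; the AnonHugePages scan now in progress is the follow-up verification re-reading meminfo after that setup.sh run (the 'Already using the uio_pci_generic driver' lines). A rough sketch of the arithmetic, with variable names borrowed from the trace and everything else illustrative:

  size_kb=1048576          # requested pool size, in kB (1 GiB)
  default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
  nr_hugepages=$(( size_kb / default_hugepages ))   # 512, as echoed by the trace

  nodes_hp=([0]=$nr_hugepages)   # single-node VM: the whole pool lands on node 0
  HUGENODE=()
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  echo "${HUGENODE[*]}"   # nodes_hp[0]=512, matching HUGENODE='nodes_hp[0]=512' above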
00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9197440 kB' 'MemAvailable: 10576804 kB' 'Buffers: 2436 kB' 'Cached: 
1593748 kB' 'SwapCached: 0 kB' 'Active: 451808 kB' 'Inactive: 1264860 kB' 'Active(anon): 130948 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132620 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71456 kB' 'KernelStack: 6304 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 
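Each of these long runs of "continue" statements is setup/common.sh's get_meminfo scanning the captured /proc/meminfo snapshot one key at a time until it reaches the requested field (AnonHugePages above, HugePages_Surp here) and echoing its value. Reconstructed from the trace, the helper behaves roughly like the sketch below (per-node handling omitted; the real function can also read /sys/devices/system/node/nodeN/meminfo when a node argument is given):

    # Sketch of the get_meminfo pattern seen in this trace.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" lines above
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo AnonHugePages)    # -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # -> 0 in this run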
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.775 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9197440 kB' 'MemAvailable: 10576804 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 451768 kB' 'Inactive: 1264860 kB' 'Active(anon): 130908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132616 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6272 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 
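For reference, the hugepage figures in these snapshots are internally consistent: HugePages_Total: 512 at Hugepagesize: 2048 kB gives 512 x 2048 kB = 1048576 kB, which matches the reported Hugetlb: 1048576 kB, and HugePages_Free: 512 with HugePages_Rsvd: 0 and HugePages_Surp: 0 means the full 512-page pool is still untouched at this point in the test.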
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.776 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:45.777 nr_hugepages=512 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:06:45.777 resv_hugepages=0 
00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:45.777 surplus_hugepages=0 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:45.777 anon_hugepages=0 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9197440 kB' 'MemAvailable: 10576804 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 451728 kB' 'Inactive: 1264860 kB' 'Active(anon): 130868 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132616 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71452 kB' 'KernelStack: 6256 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.777 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 
23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9197440 kB' 'MemUsed: 3044536 kB' 'SwapCached: 0 kB' 'Active: 451820 kB' 'Inactive: 1264860 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1596184 kB' 'Mapped: 48668 kB' 'AnonPages: 122424 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132616 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.778 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:45.779 node0=512 expecting 512 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:45.779 00:06:45.779 real 0m0.527s 00:06:45.779 user 0m0.253s 00:06:45.779 sys 0m0.301s 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.779 23:07:08 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 ************************************ 00:06:45.779 END TEST custom_alloc 
00:06:45.779 ************************************ 00:06:45.779 23:07:08 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:45.779 23:07:08 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:45.779 23:07:08 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.779 23:07:08 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.779 23:07:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:45.779 ************************************ 00:06:45.779 START TEST no_shrink_alloc 00:06:45.779 ************************************ 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:45.779 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:46.301 
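The custom_alloc check that ends above and the no_shrink_alloc run that starts here both lean on setup/common.sh's get_meminfo, which the xtrace expands one /proc/meminfo key at a time, together with the accounting check at setup/hugepages.sh@107/@110 (512 == nr_hugepages + surp + resv) and the per-node expectation "node0=512 expecting 512". The following is a condensed sketch of that idea, not the actual SPDK script: the helper name comes from the trace, but the awk body is a simplified stand-in for the mapfile/read loop shown above.

# Read one field from /proc/meminfo, or from a node's meminfo when a node id is given.
get_meminfo() {                                   # usage: get_meminfo <field> [node]
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    awk -v key="$get" -F': +' '$1 ~ key"$" { print $2 + 0; exit }' "$mem_f"
}

nr=$(get_meminfo HugePages_Total)                 # 512 in the run above
surp=$(get_meminfo HugePages_Surp)                # 0
resv=$(get_meminfo HugePages_Rsvd)                # 0
(( 512 == nr + surp + resv )) &&                  # the hugepages.sh@107/@110 accounting check
    echo "node0=$(get_meminfo HugePages_Free 0) expecting 512"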
23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8148424 kB' 'MemAvailable: 9527788 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 452224 kB' 'Inactive: 1264860 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122524 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132556 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 6256 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.301 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 
23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 
23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.302 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8148424 kB' 'MemAvailable: 9527788 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 452012 kB' 'Inactive: 1264860 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132608 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6320 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.303 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.304 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8148424 kB' 'MemAvailable: 9527788 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 452012 kB' 'Inactive: 1264860 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122292 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132608 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71444 kB' 'KernelStack: 6320 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.305 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.306 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:46.307 nr_hugepages=1024 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:46.307 resv_hugepages=0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:46.307 surplus_hugepages=0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:46.307 anon_hugepages=0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
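(The xtrace above is the test's get_meminfo helper walking /proc/meminfo one field at a time: each line is split with IFS=': ', every field other than the requested one hits "continue", and the matching value is echoed, which is why AnonHugePages, HugePages_Surp and HugePages_Rsvd all come back as 0 here while HugePages_Total stays at 1024 and hugepages.sh@107 verifies (( 1024 == nr_hugepages + surp + resv )). Below is a minimal sketch of that loop, in bash. It is a simplified paraphrase, not the actual setup/common.sh: the function name, the IFS split, the per-node meminfo path and the field names are taken from the trace, but the real script additionally uses mapfile and strips a "Node N" prefix when it reads a per-node file, as the mem=() lines above show.)

  # Simplified sketch (assumption: paraphrase of the traced setup/common.sh logic).
  # Look up a single field in /proc/meminfo, or in the per-node meminfo file
  # when a NUMA node number is supplied.
  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the field matches
          echo "$val"                        # value only; a trailing "kB" lands in "_"
          return 0
      done < "$mem_f"
      return 1
  }

  # The no_shrink_alloc check then mirrors what the trace shows at hugepages.sh@97-@109:
  nr_hugepages=1024                       # in the real script this is tracked from the earlier allocation step
  anon=$(get_meminfo AnonHugePages)       # 0 in the run above
  surp=$(get_meminfo HugePages_Surp)      # 0
  resv=$(get_meminfo HugePages_Rsvd)      # 0
  echo "anon=$anon surp=$surp resv=$resv"
  (( nr_hugepages + surp + resv == 1024 )) && echo "hugepage pool unchanged"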
00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8148424 kB' 'MemAvailable: 9527788 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 452248 kB' 'Inactive: 1264860 kB' 'Active(anon): 131388 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122524 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61164 kB' 'Slab: 132604 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71440 kB' 'KernelStack: 6288 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 351376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.307 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
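Because HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp sit near the bottom of the snapshot (after the CMA and Unaccepted fields), every lookup walks past every earlier field first, which is why a single get_meminfo call accounts for most of the trace volume here. Outside the test harness the same number could be pulled with a one-line filter, for example:

    # Equivalent one-liner (not what the harness uses): print the value
    # column of the HugePages_Total line.
    awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo
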
00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.308 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8148424 kB' 'MemUsed: 4093552 kB' 'SwapCached: 0 kB' 'Active: 451936 kB' 'Inactive: 1264860 kB' 'Active(anon): 131076 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1596184 kB' 'Mapped: 48668 kB' 'AnonPages: 122208 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61164 kB' 'Slab: 132596 kB' 'SReclaimable: 61164 kB' 'SUnreclaim: 71432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.309 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 
23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:46.310 node0=1024 expecting 1024 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:46.310 23:07:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.569 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.569 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:46.569 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:46.569 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.569 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8150024 kB' 'MemAvailable: 9529384 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 448440 kB' 'Inactive: 1264860 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 119008 kB' 'Mapped: 48112 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132496 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71336 kB' 'KernelStack: 6144 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
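Before this point the per-node pass confirmed the same totals from /sys/devices/system/node/node0/meminfo (whose lines carry a 'Node 0 ' prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips), printed 'node0=1024 expecting 1024', and re-ran scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no; since 1024 pages were already allocated on node0, setup.sh only emitted the INFO line and left the pool untouched. A sketch of that invocation, assuming the repository path from this run and that setup.sh honours the NRHUGE/CLEAR_HUGE environment variables the way the hugepages.sh wrapper sets them here:

    # Re-request hugepages without clearing the existing pool. With 1024
    # pages already allocated, asking for 512 is a no-op beyond the INFO
    # message seen above.
    cd /home/vagrant/spdk_repo/spdk
    CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh
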
00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.832 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
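The verify pass also accounts for transparent hugepages: hugepages.sh:96 tests the THP policy string, which is 'always [madvise] never' in this run (i.e. not forced to never), and therefore goes on to read AnonHugePages; the scan below eventually returns 0 kB and hugepages.sh records anon=0. A hedged sketch of that pair of checks (the sysfs path is an assumption inferred from the policy string in the trace; hugepages.sh obtains it through its own helper):

    # Transparent-hugepage accounting step (sketch).
    # Assumed path: /sys/kernel/mm/transparent_hugepage/enabled
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP is not fully disabled, so anonymous huge pages may exist.
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in this run
    fi
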
00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.833 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8150168 kB' 'MemAvailable: 9529528 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 448004 kB' 'Inactive: 1264860 kB' 'Active(anon): 127144 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118252 kB' 'Mapped: 47964 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132464 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71304 kB' 'KernelStack: 6208 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.834 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 
23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.835 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8150168 kB' 'MemAvailable: 9529528 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 447700 kB' 'Inactive: 1264860 kB' 'Active(anon): 126840 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118240 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132460 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 6192 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.836 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.837 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:46.838 nr_hugepages=1024 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:46.838 resv_hugepages=0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:46.838 surplus_hugepages=0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:46.838 anon_hugepages=0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8150168 kB' 'MemAvailable: 9529528 kB' 'Buffers: 2436 kB' 'Cached: 1593748 kB' 'SwapCached: 0 kB' 'Active: 447952 kB' 'Inactive: 1264860 kB' 'Active(anon): 127092 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118236 kB' 'Mapped: 47972 kB' 'Shmem: 10464 kB' 'KReclaimable: 61160 kB' 'Slab: 132460 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 6192 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 336356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 5072896 kB' 'DirectMap1G: 9437184 kB' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.838 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.839 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
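
A readability note: the backslashed patterns such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are simply how bash's xtrace prints a quoted right-hand side inside [[ ]], presumably from a comparison like [[ $var == "$get" ]] in the traced helper; the escaping marks the operand as a literal string rather than a glob. A quick reproduction:

    set -x
    get=HugePages_Total var=MemFree
    [[ $var == "$get" ]]   # traced roughly as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    set +x
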
00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8150168 kB' 'MemUsed: 4091808 kB' 'SwapCached: 0 kB' 'Active: 
447772 kB' 'Inactive: 1264860 kB' 'Active(anon): 126912 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1264860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1596184 kB' 'Mapped: 47972 kB' 'AnonPages: 118320 kB' 'Shmem: 10464 kB' 'KernelStack: 6208 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61160 kB' 'Slab: 132460 kB' 'SReclaimable: 61160 kB' 'SUnreclaim: 71300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 
23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.840 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:46.841 node0=1024 expecting 1024 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:46.841 00:06:46.841 real 0m1.058s 00:06:46.841 user 0m0.516s 00:06:46.841 sys 0m0.580s 00:06:46.841 23:07:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.842 23:07:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:46.842 ************************************ 00:06:46.842 END TEST no_shrink_alloc 00:06:46.842 ************************************ 00:06:46.842 23:07:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:46.842 
23:07:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:46.842 23:07:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:46.842 00:06:46.842 real 0m4.654s 00:06:46.842 user 0m2.158s 00:06:46.842 sys 0m2.485s 00:06:46.842 23:07:09 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.842 ************************************ 00:06:46.842 END TEST hugepages 00:06:46.842 ************************************ 00:06:46.842 23:07:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:47.100 23:07:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:47.100 23:07:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:47.100 23:07:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.100 23:07:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.100 23:07:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:47.100 ************************************ 00:06:47.100 START TEST driver 00:06:47.100 ************************************ 00:06:47.100 23:07:09 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:06:47.100 * Looking for test storage... 00:06:47.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:47.100 23:07:09 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:47.100 23:07:09 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:47.100 23:07:09 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:47.665 23:07:09 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:47.665 23:07:09 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.665 23:07:09 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.665 23:07:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:47.665 ************************************ 00:06:47.665 START TEST guess_driver 00:06:47.665 ************************************ 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:47.665 23:07:09 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
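
The guess_driver probe that begins above checks VFIO first (are there entries under /sys/kernel/iommu_groups, or is unsafe no-IOMMU mode enabled?) and, as the trace just below shows, falls back to uio_pci_generic by resolving the module with modprobe --show-depends. A condensed sketch of that decision, assuming nullglob is in effect as it evidently is in the traced run (and using one consistent spelling of iommu_groups, which the trace declares as "local iommu_grups" but then assigns as "iommu_groups"):

    shopt -s nullglob   # an empty /sys/kernel/iommu_groups must count as 0 entries

    guess_driver_sketch() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe=''
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci   # assumed name for the VFIO branch; not taken in this log
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
            return 1
        fi
    }

    guess_driver_sketch   # prints uio_pci_generic on the VM in this log
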
00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:06:47.665 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:06:47.666 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:47.666 Looking for driver=uio_pci_generic 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:47.666 23:07:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:48.231 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:06:48.231 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:06:48.231 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:48.507 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:48.507 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:48.508 23:07:10 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:49.135 00:06:49.135 real 0m1.404s 00:06:49.135 user 0m0.495s 00:06:49.135 sys 0m0.899s 00:06:49.135 23:07:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:49.135 23:07:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 END TEST guess_driver 00:06:49.135 ************************************ 00:06:49.135 23:07:11 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:06:49.135 00:06:49.135 real 0m2.087s 00:06:49.135 user 0m0.711s 00:06:49.135 sys 0m1.418s 00:06:49.135 23:07:11 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.135 23:07:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 END TEST driver 00:06:49.135 ************************************ 00:06:49.135 23:07:11 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:49.135 23:07:11 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:49.135 23:07:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.135 23:07:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.135 23:07:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 START TEST devices 00:06:49.135 ************************************ 00:06:49.135 23:07:11 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:06:49.135 * Looking for test storage... 00:06:49.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:06:49.135 23:07:11 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:49.135 23:07:11 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:49.135 23:07:11 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:49.135 23:07:11 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
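
Before the device tests start, get_zoned_devs walks /sys/block/nvme* (above and continuing below) and records any namespace whose queue/zoned attribute is not "none"; the later per-device checks ([[ '' == *0000:00:11.0* ]]) skip anything in that list. A hypothetical standalone version of that filter, not the autotest_common.sh helper itself:

    get_zoned_devs_sketch() {
        local nvme zoned
        for nvme in /sys/block/nvme*; do
            [[ -e $nvme/queue/zoned ]] || continue
            zoned=$(<"$nvme/queue/zoned")
            [[ $zoned != none ]] && echo "${nvme##*/}: $zoned"
        done
        return 0
    }

    get_zoned_devs_sketch   # prints nothing here: nvme0n1..n3 and nvme1n1 all report "none"
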
00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:50.069 23:07:12 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:50.069 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:50.069 23:07:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:06:50.070 No valid GPT data, bailing 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:06:50.070 
23:07:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:06:50.070 No valid GPT data, bailing 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:06:50.070 No valid GPT data, bailing 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:06:50.070 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:06:50.070 23:07:12 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:06:50.070 23:07:12 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:06:50.328 No valid GPT data, bailing 00:06:50.328 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:50.328 23:07:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:50.328 23:07:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:06:50.328 23:07:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:06:50.328 23:07:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:06:50.328 23:07:12 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:50.328 23:07:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:50.328 23:07:12 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.328 23:07:12 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.328 23:07:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:50.328 ************************************ 00:06:50.328 START TEST nvme_mount 00:06:50.328 ************************************ 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:50.328 23:07:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:50.329 23:07:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:51.262 Creating new GPT entries in memory. 00:06:51.262 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:51.262 other utilities. 00:06:51.262 23:07:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:51.262 23:07:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:51.262 23:07:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:51.262 23:07:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:51.262 23:07:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:52.634 Creating new GPT entries in memory. 00:06:52.634 The operation has completed successfully. 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57169 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:52.634 23:07:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.634 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:52.634 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:52.892 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:52.892 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:53.150 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:53.150 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:53.150 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:53.150 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:53.150 23:07:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.408 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.666 23:07:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.666 23:07:15 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:53.666 23:07:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:53.924 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:54.182 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:54.182 00:06:54.182 real 0m3.964s 00:06:54.182 user 0m0.696s 00:06:54.182 sys 0m1.010s 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.182 23:07:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:54.182 ************************************ 00:06:54.182 END TEST nvme_mount 00:06:54.182 ************************************ 00:06:54.182 23:07:16 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:54.182 23:07:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:54.182 23:07:16 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.182 23:07:16 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.182 23:07:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:54.182 ************************************ 00:06:54.182 START TEST dm_mount 00:06:54.182 ************************************ 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
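The dm_mount trace above has just built the partition name list (nvme0n1p1, nvme0n1p2); the sgdisk calls in the next trace lines carve those two partitions out of /dev/nvme0n1. A minimal sketch of that partitioning step, using only values visible in this trace (size converted from bytes to 4096-byte blocks in common.sh@51, first usable LBA 2048); other disk geometries would need different LBAs:

disk=/dev/nvme0n1
size=$(( 1073741824 / 4096 ))                      # common.sh@51: bytes -> blocks
sgdisk "$disk" --zap-all                           # drop any existing GPT/MBR signatures
part_start=0
for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
# common.sh also runs scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
# alongside, so the test only continues once udev has created both partition nodes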
00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:54.182 23:07:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:55.568 Creating new GPT entries in memory. 00:06:55.568 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:55.568 other utilities. 00:06:55.568 23:07:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:55.568 23:07:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:55.568 23:07:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:55.568 23:07:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:55.568 23:07:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:06:56.503 Creating new GPT entries in memory. 00:06:56.503 The operation has completed successfully. 00:06:56.503 23:07:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:56.503 23:07:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:56.503 23:07:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:56.503 23:07:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:56.503 23:07:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:06:57.438 The operation has completed successfully. 
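With both GPT entries created, the next trace lines assemble the device-mapper target and mount it. The log only shows "dmsetup create nvme_dm_test" (devices.sh@155); the table fed to it is not captured here, so the linear concatenation of the two partitions below, and the use of blockdev to size them, are assumptions rather than the harness's exact table:

dm_name=nvme_dm_test
dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
p1_sectors=$(blockdev --getsz /dev/nvme0n1p1)      # sizes in 512-byte sectors (assumed helper)
p2_sectors=$(blockdev --getsz /dev/nvme0n1p2)
dmsetup create "$dm_name" <<EOF
0 $p1_sectors linear /dev/nvme0n1p1 0
$p1_sectors $p2_sectors linear /dev/nvme0n1p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/$dm_name)")   # resolves to dm-0 in this run
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]        # both partitions must list the dm node
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]        # as a holder before the test proceeds
mkdir -p "$dm_mount"
mkfs.ext4 -qF /dev/mapper/$dm_name
mount /dev/mapper/$dm_name "$dm_mount"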
00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57596 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.438 23:07:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.698 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:57.957 23:07:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.214 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:58.472 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:58.472 00:06:58.472 real 0m4.241s 00:06:58.472 user 0m0.458s 00:06:58.472 sys 0m0.731s 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.472 23:07:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:58.472 ************************************ 00:06:58.472 END TEST dm_mount 00:06:58.472 ************************************ 00:06:58.472 23:07:20 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:58.472 23:07:20 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:58.472 23:07:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:58.730 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.730 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.730 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:58.730 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:58.730 23:07:21 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:58.988 00:06:58.988 real 0m9.739s 00:06:58.988 user 0m1.792s 00:06:58.988 sys 0m2.350s 00:06:58.988 23:07:21 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.988 23:07:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:58.988 ************************************ 00:06:58.988 END TEST devices 00:06:58.988 ************************************ 00:06:58.988 23:07:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:06:58.988 00:06:58.988 real 0m21.457s 00:06:58.988 user 0m6.781s 00:06:58.988 sys 0m9.046s 00:06:58.988 23:07:21 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.988 ************************************ 00:06:58.988 END TEST setup.sh 00:06:58.988 ************************************ 00:06:58.988 23:07:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:58.988 23:07:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:58.988 23:07:21 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:59.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.554 Hugepages 00:06:59.554 node hugesize free / total 00:06:59.555 node0 1048576kB 0 / 0 00:06:59.555 node0 2048kB 2048 / 2048 00:06:59.555 00:06:59.555 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:59.813 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:59.813 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:06:59.813 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:06:59.813 23:07:22 -- spdk/autotest.sh@130 -- # uname -s 00:06:59.813 23:07:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:59.813 23:07:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:59.813 23:07:22 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:00.385 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:00.651 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:00.651 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:00.651 23:07:23 -- common/autotest_common.sh@1532 -- # sleep 1 00:07:01.615 23:07:24 -- common/autotest_common.sh@1533 -- # bdfs=() 00:07:01.615 23:07:24 -- common/autotest_common.sh@1533 -- # local bdfs 00:07:01.615 23:07:24 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:07:01.615 23:07:24 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:07:01.615 23:07:24 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:01.615 23:07:24 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:01.615 23:07:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.615 23:07:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.615 23:07:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:01.874 23:07:24 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:01.874 23:07:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:01.874 23:07:24 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:02.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:02.131 Waiting for block devices as requested 00:07:02.131 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:02.390 23:07:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:02.390 23:07:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:02.390 23:07:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:02.390 23:07:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:02.390 23:07:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1557 -- # continue 00:07:02.390 
23:07:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:07:02.390 23:07:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:07:02.390 23:07:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:07:02.390 23:07:24 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:07:02.390 23:07:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:07:02.390 23:07:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:07:02.390 23:07:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:07:02.390 23:07:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:07:02.390 23:07:24 -- common/autotest_common.sh@1557 -- # continue 00:07:02.390 23:07:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:07:02.390 23:07:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:02.390 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.390 23:07:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:07:02.390 23:07:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:02.390 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.390 23:07:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:03.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:03.323 23:07:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:07:03.323 23:07:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.323 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.323 23:07:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:07:03.323 23:07:25 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:07:03.323 23:07:25 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:07:03.323 23:07:25 -- common/autotest_common.sh@1577 -- # bdfs=() 00:07:03.323 23:07:25 -- common/autotest_common.sh@1577 -- # local bdfs 00:07:03.323 23:07:25 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:07:03.323 23:07:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:07:03.323 23:07:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:07:03.323 23:07:25 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:03.323 23:07:25 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:03.323 23:07:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:07:03.323 23:07:25 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:07:03.323 23:07:25 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:03.323 23:07:25 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:03.323 23:07:25 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:03.323 23:07:25 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:03.323 23:07:25 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:03.323 23:07:25 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:07:03.323 23:07:25 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:03.323 23:07:25 -- common/autotest_common.sh@1580 -- # device=0x0010 00:07:03.323 23:07:25 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:03.323 23:07:25 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:07:03.323 23:07:25 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:07:03.323 23:07:25 -- common/autotest_common.sh@1593 -- # return 0 00:07:03.323 23:07:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:07:03.323 23:07:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:03.323 23:07:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:03.323 23:07:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:03.323 23:07:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:03.323 23:07:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.323 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.323 23:07:25 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:07:03.323 23:07:25 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:07:03.323 23:07:25 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:07:03.323 23:07:25 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:03.323 23:07:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.323 23:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.323 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.323 ************************************ 00:07:03.323 START TEST env 00:07:03.323 ************************************ 00:07:03.323 23:07:25 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:03.581 * Looking for test storage... 
00:07:03.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:03.581 23:07:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:03.581 23:07:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.581 23:07:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.581 23:07:25 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.581 ************************************ 00:07:03.581 START TEST env_memory 00:07:03.581 ************************************ 00:07:03.581 23:07:25 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:03.581 00:07:03.581 00:07:03.581 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.581 http://cunit.sourceforge.net/ 00:07:03.581 00:07:03.581 00:07:03.581 Suite: memory 00:07:03.581 Test: alloc and free memory map ...[2024-07-24 23:07:25.937892] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:03.581 passed 00:07:03.581 Test: mem map translation ...[2024-07-24 23:07:25.969044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:03.581 [2024-07-24 23:07:25.969112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:03.581 [2024-07-24 23:07:25.969181] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:03.581 [2024-07-24 23:07:25.969194] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:03.581 passed 00:07:03.581 Test: mem map registration ...[2024-07-24 23:07:26.033692] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:03.581 [2024-07-24 23:07:26.033751] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:03.581 passed 00:07:03.838 Test: mem map adjacent registrations ...passed 00:07:03.838 00:07:03.838 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.838 suites 1 1 n/a 0 0 00:07:03.838 tests 4 4 4 0 0 00:07:03.838 asserts 152 152 152 0 n/a 00:07:03.838 00:07:03.838 Elapsed time = 0.218 seconds 00:07:03.838 00:07:03.838 real 0m0.233s 00:07:03.838 user 0m0.220s 00:07:03.838 sys 0m0.010s 00:07:03.838 23:07:26 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.838 ************************************ 00:07:03.839 END TEST env_memory 00:07:03.839 ************************************ 00:07:03.839 23:07:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:03.839 23:07:26 env -- common/autotest_common.sh@1142 -- # return 0 00:07:03.839 23:07:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:03.839 23:07:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.839 23:07:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.839 23:07:26 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.839 ************************************ 00:07:03.839 START TEST env_vtophys 
00:07:03.839 ************************************ 00:07:03.839 23:07:26 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:03.839 EAL: lib.eal log level changed from notice to debug 00:07:03.839 EAL: Detected lcore 0 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 1 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 2 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 3 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 4 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 5 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 6 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 7 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 8 as core 0 on socket 0 00:07:03.839 EAL: Detected lcore 9 as core 0 on socket 0 00:07:03.839 EAL: Maximum logical cores by configuration: 128 00:07:03.839 EAL: Detected CPU lcores: 10 00:07:03.839 EAL: Detected NUMA nodes: 1 00:07:03.839 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:03.839 EAL: Detected shared linkage of DPDK 00:07:03.839 EAL: No shared files mode enabled, IPC will be disabled 00:07:03.839 EAL: Selected IOVA mode 'PA' 00:07:03.839 EAL: Probing VFIO support... 00:07:03.839 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:03.839 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:03.839 EAL: Ask a virtual area of 0x2e000 bytes 00:07:03.839 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:03.839 EAL: Setting up physically contiguous memory... 00:07:03.839 EAL: Setting maximum number of open files to 524288 00:07:03.839 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:03.839 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:03.839 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.839 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:03.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.839 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.839 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:03.839 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:03.839 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.839 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:03.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.839 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.839 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:03.839 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:03.839 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.839 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:03.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.839 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.839 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:03.839 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:03.839 EAL: Ask a virtual area of 0x61000 bytes 00:07:03.839 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:03.839 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:03.839 EAL: Ask a virtual area of 0x400000000 bytes 00:07:03.839 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:03.839 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:03.839 EAL: Hugepages will be freed exactly as allocated. 
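The EAL lines above only reserve virtual address space; no hugepage memory is touched yet. The figures are easy to cross-check from the values in the trace (4 memseg lists, n_segs:8192, hugepage_sz:2097152, and the 2048 x 2048kB hugepages reported by setup.sh status earlier in this log):

printf '0x%x\n' $(( 8192 * 2097152 ))              # 0x400000000: VA asked for per memseg list
echo $(( 4 * 8192 * 2097152 / 1024**3 ))GiB        # 64GiB of VA reserved across the 4 lists
echo $(( 2048 * 2097152 / 1024**3 ))GiB            # only 4GiB of 2MB hugepages actually exist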
00:07:03.839 EAL: No shared files mode enabled, IPC is disabled 00:07:03.839 EAL: No shared files mode enabled, IPC is disabled 00:07:03.839 EAL: TSC frequency is ~2200000 KHz 00:07:03.839 EAL: Main lcore 0 is ready (tid=7f0a746afa00;cpuset=[0]) 00:07:03.839 EAL: Trying to obtain current memory policy. 00:07:03.839 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.839 EAL: Restoring previous memory policy: 0 00:07:03.839 EAL: request: mp_malloc_sync 00:07:03.839 EAL: No shared files mode enabled, IPC is disabled 00:07:03.839 EAL: Heap on socket 0 was expanded by 2MB 00:07:03.839 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:04.097 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:04.097 EAL: Mem event callback 'spdk:(nil)' registered 00:07:04.097 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:04.097 00:07:04.097 00:07:04.097 CUnit - A unit testing framework for C - Version 2.1-3 00:07:04.097 http://cunit.sourceforge.net/ 00:07:04.097 00:07:04.097 00:07:04.097 Suite: components_suite 00:07:04.097 Test: vtophys_malloc_test ...passed 00:07:04.097 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 4MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 4MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 6MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 6MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 10MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 10MB 00:07:04.097 EAL: Trying to obtain current memory policy. 
00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 18MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 18MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 34MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 34MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 66MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 66MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 130MB 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was shrunk by 130MB 00:07:04.097 EAL: Trying to obtain current memory policy. 00:07:04.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.097 EAL: Restoring previous memory policy: 4 00:07:04.097 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.097 EAL: request: mp_malloc_sync 00:07:04.097 EAL: No shared files mode enabled, IPC is disabled 00:07:04.097 EAL: Heap on socket 0 was expanded by 258MB 00:07:04.355 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.355 EAL: request: mp_malloc_sync 00:07:04.355 EAL: No shared files mode enabled, IPC is disabled 00:07:04.355 EAL: Heap on socket 0 was shrunk by 258MB 00:07:04.355 EAL: Trying to obtain current memory policy. 
00:07:04.355 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.355 EAL: Restoring previous memory policy: 4 00:07:04.355 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.355 EAL: request: mp_malloc_sync 00:07:04.355 EAL: No shared files mode enabled, IPC is disabled 00:07:04.355 EAL: Heap on socket 0 was expanded by 514MB 00:07:04.615 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.615 EAL: request: mp_malloc_sync 00:07:04.615 EAL: No shared files mode enabled, IPC is disabled 00:07:04.615 EAL: Heap on socket 0 was shrunk by 514MB 00:07:04.615 EAL: Trying to obtain current memory policy. 00:07:04.615 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.880 EAL: Restoring previous memory policy: 4 00:07:04.880 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.880 EAL: request: mp_malloc_sync 00:07:04.880 EAL: No shared files mode enabled, IPC is disabled 00:07:04.880 EAL: Heap on socket 0 was expanded by 1026MB 00:07:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:07:05.396 passed 00:07:05.396 00:07:05.396 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.396 suites 1 1 n/a 0 0 00:07:05.396 tests 2 2 2 0 0 00:07:05.396 asserts 5358 5358 5358 0 n/a 00:07:05.396 00:07:05.396 Elapsed time = 1.352 seconds 00:07:05.396 EAL: request: mp_malloc_sync 00:07:05.396 EAL: No shared files mode enabled, IPC is disabled 00:07:05.396 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:05.396 EAL: Calling mem event callback 'spdk:(nil)' 00:07:05.396 EAL: request: mp_malloc_sync 00:07:05.396 EAL: No shared files mode enabled, IPC is disabled 00:07:05.396 EAL: Heap on socket 0 was shrunk by 2MB 00:07:05.396 EAL: No shared files mode enabled, IPC is disabled 00:07:05.396 EAL: No shared files mode enabled, IPC is disabled 00:07:05.396 EAL: No shared files mode enabled, IPC is disabled 00:07:05.396 00:07:05.396 real 0m1.554s 00:07:05.396 user 0m0.839s 00:07:05.396 sys 0m0.583s 00:07:05.396 23:07:27 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.396 23:07:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:05.396 ************************************ 00:07:05.396 END TEST env_vtophys 00:07:05.396 ************************************ 00:07:05.396 23:07:27 env -- common/autotest_common.sh@1142 -- # return 0 00:07:05.396 23:07:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:05.396 23:07:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.396 23:07:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.396 23:07:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.396 ************************************ 00:07:05.396 START TEST env_pci 00:07:05.396 ************************************ 00:07:05.396 23:07:27 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:05.396 00:07:05.396 00:07:05.396 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.396 http://cunit.sourceforge.net/ 00:07:05.396 00:07:05.397 00:07:05.397 Suite: pci 00:07:05.397 Test: pci_hook ...[2024-07-24 23:07:27.798577] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58790 has claimed it 00:07:05.397 passed 00:07:05.397 00:07:05.397 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.397 suites 1 1 n/a 0 0 00:07:05.397 tests 1 1 1 0 0 00:07:05.397 asserts 25 25 25 0 n/a 00:07:05.397 
00:07:05.397 Elapsed time = 0.002 seconds 00:07:05.397 EAL: Cannot find device (10000:00:01.0) 00:07:05.397 EAL: Failed to attach device on primary process 00:07:05.397 00:07:05.397 real 0m0.019s 00:07:05.397 user 0m0.009s 00:07:05.397 sys 0m0.010s 00:07:05.397 23:07:27 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.397 ************************************ 00:07:05.397 END TEST env_pci 00:07:05.397 ************************************ 00:07:05.397 23:07:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:05.397 23:07:27 env -- common/autotest_common.sh@1142 -- # return 0 00:07:05.397 23:07:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:05.397 23:07:27 env -- env/env.sh@15 -- # uname 00:07:05.397 23:07:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:05.397 23:07:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:05.397 23:07:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:05.397 23:07:27 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:05.397 23:07:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.397 23:07:27 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.397 ************************************ 00:07:05.397 START TEST env_dpdk_post_init 00:07:05.397 ************************************ 00:07:05.397 23:07:27 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:05.655 EAL: Detected CPU lcores: 10 00:07:05.655 EAL: Detected NUMA nodes: 1 00:07:05.655 EAL: Detected shared linkage of DPDK 00:07:05.655 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:05.655 EAL: Selected IOVA mode 'PA' 00:07:05.655 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:05.655 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:05.655 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:05.655 Starting DPDK initialization... 00:07:05.655 Starting SPDK post initialization... 00:07:05.655 SPDK NVMe probe 00:07:05.655 Attaching to 0000:00:10.0 00:07:05.655 Attaching to 0000:00:11.0 00:07:05.655 Attached to 0000:00:10.0 00:07:05.655 Attached to 0000:00:11.0 00:07:05.655 Cleaning up... 
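The post-init test above was started by env.sh with an explicit core mask and base virtual address and attached to both emulated NVMe controllers. Re-running it by hand looks like the sketch below; paths are the ones from this particular checkout, and the sysfs listing is just a convenient way to confirm setup.sh left both devices bound to a userspace driver (uio_pci_generic in this run) before launching:

ls -l /sys/bus/pci/devices/0000:00:10.0/driver /sys/bus/pci/devices/0000:00:11.0/driver
cd /home/vagrant/spdk_repo/spdk
./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000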
00:07:05.655 00:07:05.655 real 0m0.179s 00:07:05.655 user 0m0.040s 00:07:05.655 sys 0m0.039s 00:07:05.655 23:07:28 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.655 ************************************ 00:07:05.655 END TEST env_dpdk_post_init 00:07:05.655 ************************************ 00:07:05.655 23:07:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.655 23:07:28 env -- common/autotest_common.sh@1142 -- # return 0 00:07:05.655 23:07:28 env -- env/env.sh@26 -- # uname 00:07:05.655 23:07:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:05.655 23:07:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.655 23:07:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.655 23:07:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.655 23:07:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.655 ************************************ 00:07:05.655 START TEST env_mem_callbacks 00:07:05.656 ************************************ 00:07:05.656 23:07:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:05.656 EAL: Detected CPU lcores: 10 00:07:05.656 EAL: Detected NUMA nodes: 1 00:07:05.656 EAL: Detected shared linkage of DPDK 00:07:05.656 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:05.914 EAL: Selected IOVA mode 'PA' 00:07:05.914 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:05.914 00:07:05.914 00:07:05.914 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.914 http://cunit.sourceforge.net/ 00:07:05.914 00:07:05.914 00:07:05.914 Suite: memory 00:07:05.914 Test: test ... 
00:07:05.914 register 0x200000200000 2097152 00:07:05.914 malloc 3145728 00:07:05.914 register 0x200000400000 4194304 00:07:05.914 buf 0x200000500000 len 3145728 PASSED 00:07:05.914 malloc 64 00:07:05.914 buf 0x2000004fff40 len 64 PASSED 00:07:05.914 malloc 4194304 00:07:05.914 register 0x200000800000 6291456 00:07:05.914 buf 0x200000a00000 len 4194304 PASSED 00:07:05.914 free 0x200000500000 3145728 00:07:05.914 free 0x2000004fff40 64 00:07:05.914 unregister 0x200000400000 4194304 PASSED 00:07:05.914 free 0x200000a00000 4194304 00:07:05.914 unregister 0x200000800000 6291456 PASSED 00:07:05.914 malloc 8388608 00:07:05.914 register 0x200000400000 10485760 00:07:05.914 buf 0x200000600000 len 8388608 PASSED 00:07:05.914 free 0x200000600000 8388608 00:07:05.914 unregister 0x200000400000 10485760 PASSED 00:07:05.914 passed 00:07:05.914 00:07:05.914 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.914 suites 1 1 n/a 0 0 00:07:05.914 tests 1 1 1 0 0 00:07:05.914 asserts 15 15 15 0 n/a 00:07:05.914 00:07:05.914 Elapsed time = 0.009 seconds 00:07:05.914 00:07:05.914 real 0m0.157s 00:07:05.914 user 0m0.024s 00:07:05.914 sys 0m0.031s 00:07:05.914 23:07:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.914 23:07:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:05.914 ************************************ 00:07:05.914 END TEST env_mem_callbacks 00:07:05.914 ************************************ 00:07:05.914 23:07:28 env -- common/autotest_common.sh@1142 -- # return 0 00:07:05.914 00:07:05.914 real 0m2.504s 00:07:05.914 user 0m1.258s 00:07:05.914 sys 0m0.896s 00:07:05.914 23:07:28 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.914 ************************************ 00:07:05.914 END TEST env 00:07:05.914 ************************************ 00:07:05.914 23:07:28 env -- common/autotest_common.sh@10 -- # set +x 00:07:05.914 23:07:28 -- common/autotest_common.sh@1142 -- # return 0 00:07:05.914 23:07:28 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:05.914 23:07:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.914 23:07:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.914 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.914 ************************************ 00:07:05.914 START TEST rpc 00:07:05.914 ************************************ 00:07:05.914 23:07:28 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:06.173 * Looking for test storage... 00:07:06.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:06.173 23:07:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58900 00:07:06.173 23:07:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.173 23:07:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:06.173 23:07:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58900 00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@829 -- # '[' -z 58900 ']' 00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
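Note: from this point on, rpc.sh drives a live spdk_tgt (started above with the bdev tracepoint group enabled via -e bdev) entirely over JSON-RPC, and waitforlisten blocks until the UNIX-domain socket answers. A hand-rolled equivalent of that startup step might look like the sketch below; the polling loop and the scripts/rpc.py path are assumptions rather than part of this trace, while the target binary path and socket are the ones shown above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    # Poll the default RPC socket until the target answers a trivial request.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"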
00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.173 23:07:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.173 [2024-07-24 23:07:28.503865] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:06.173 [2024-07-24 23:07:28.503979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58900 ] 00:07:06.173 [2024-07-24 23:07:28.643224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.432 [2024-07-24 23:07:28.780901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:06.432 [2024-07-24 23:07:28.780981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58900' to capture a snapshot of events at runtime. 00:07:06.432 [2024-07-24 23:07:28.780997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.432 [2024-07-24 23:07:28.781009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.432 [2024-07-24 23:07:28.781019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58900 for offline analysis/debug. 00:07:06.432 [2024-07-24 23:07:28.781060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.432 [2024-07-24 23:07:28.843332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.997 23:07:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.997 23:07:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:07:06.997 23:07:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:06.997 23:07:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:06.997 23:07:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:06.997 23:07:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:06.997 23:07:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.997 23:07:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.997 23:07:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.997 ************************************ 00:07:06.997 START TEST rpc_integrity 00:07:06.997 ************************************ 00:07:06.997 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:06.997 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:06.997 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.997 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:06.997 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.997 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:06.997 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:06.997 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:06.997 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.255 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:07.255 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.255 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.255 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.255 { 00:07:07.255 "name": "Malloc0", 00:07:07.255 "aliases": [ 00:07:07.255 "3b40be7c-0d75-458e-917f-fe300fe812d5" 00:07:07.255 ], 00:07:07.255 "product_name": "Malloc disk", 00:07:07.255 "block_size": 512, 00:07:07.255 "num_blocks": 16384, 00:07:07.255 "uuid": "3b40be7c-0d75-458e-917f-fe300fe812d5", 00:07:07.255 "assigned_rate_limits": { 00:07:07.255 "rw_ios_per_sec": 0, 00:07:07.255 "rw_mbytes_per_sec": 0, 00:07:07.255 "r_mbytes_per_sec": 0, 00:07:07.255 "w_mbytes_per_sec": 0 00:07:07.255 }, 00:07:07.255 "claimed": false, 00:07:07.255 "zoned": false, 00:07:07.255 "supported_io_types": { 00:07:07.255 "read": true, 00:07:07.255 "write": true, 00:07:07.255 "unmap": true, 00:07:07.255 "flush": true, 00:07:07.255 "reset": true, 00:07:07.255 "nvme_admin": false, 00:07:07.255 "nvme_io": false, 00:07:07.255 "nvme_io_md": false, 00:07:07.255 "write_zeroes": true, 00:07:07.255 "zcopy": true, 00:07:07.255 "get_zone_info": false, 00:07:07.255 "zone_management": false, 00:07:07.255 "zone_append": false, 00:07:07.255 "compare": false, 00:07:07.255 "compare_and_write": false, 00:07:07.255 "abort": true, 00:07:07.255 "seek_hole": false, 00:07:07.255 "seek_data": false, 00:07:07.255 "copy": true, 00:07:07.255 "nvme_iov_md": false 00:07:07.255 }, 00:07:07.255 "memory_domains": [ 00:07:07.255 { 00:07:07.255 "dma_device_id": "system", 00:07:07.255 "dma_device_type": 1 00:07:07.255 }, 00:07:07.255 { 00:07:07.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.255 "dma_device_type": 2 00:07:07.255 } 00:07:07.255 ], 00:07:07.255 "driver_specific": {} 00:07:07.255 } 00:07:07.256 ]' 00:07:07.256 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.256 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.256 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.256 [2024-07-24 23:07:29.574013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:07.256 [2024-07-24 23:07:29.574072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.256 [2024-07-24 23:07:29.574097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b01da0 00:07:07.256 [2024-07-24 23:07:29.574114] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.256 [2024-07-24 23:07:29.575853] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.256 [2024-07-24 23:07:29.575890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:07:07.256 Passthru0 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.256 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.256 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.256 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.256 { 00:07:07.256 "name": "Malloc0", 00:07:07.256 "aliases": [ 00:07:07.256 "3b40be7c-0d75-458e-917f-fe300fe812d5" 00:07:07.256 ], 00:07:07.256 "product_name": "Malloc disk", 00:07:07.256 "block_size": 512, 00:07:07.256 "num_blocks": 16384, 00:07:07.256 "uuid": "3b40be7c-0d75-458e-917f-fe300fe812d5", 00:07:07.256 "assigned_rate_limits": { 00:07:07.256 "rw_ios_per_sec": 0, 00:07:07.256 "rw_mbytes_per_sec": 0, 00:07:07.256 "r_mbytes_per_sec": 0, 00:07:07.256 "w_mbytes_per_sec": 0 00:07:07.256 }, 00:07:07.256 "claimed": true, 00:07:07.256 "claim_type": "exclusive_write", 00:07:07.256 "zoned": false, 00:07:07.256 "supported_io_types": { 00:07:07.256 "read": true, 00:07:07.256 "write": true, 00:07:07.256 "unmap": true, 00:07:07.256 "flush": true, 00:07:07.256 "reset": true, 00:07:07.256 "nvme_admin": false, 00:07:07.256 "nvme_io": false, 00:07:07.256 "nvme_io_md": false, 00:07:07.256 "write_zeroes": true, 00:07:07.256 "zcopy": true, 00:07:07.256 "get_zone_info": false, 00:07:07.256 "zone_management": false, 00:07:07.256 "zone_append": false, 00:07:07.256 "compare": false, 00:07:07.256 "compare_and_write": false, 00:07:07.256 "abort": true, 00:07:07.256 "seek_hole": false, 00:07:07.256 "seek_data": false, 00:07:07.256 "copy": true, 00:07:07.256 "nvme_iov_md": false 00:07:07.256 }, 00:07:07.256 "memory_domains": [ 00:07:07.256 { 00:07:07.256 "dma_device_id": "system", 00:07:07.256 "dma_device_type": 1 00:07:07.256 }, 00:07:07.256 { 00:07:07.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.256 "dma_device_type": 2 00:07:07.256 } 00:07:07.256 ], 00:07:07.256 "driver_specific": {} 00:07:07.256 }, 00:07:07.256 { 00:07:07.256 "name": "Passthru0", 00:07:07.256 "aliases": [ 00:07:07.256 "1ec3304d-7801-5b8f-a40c-d20cef259f44" 00:07:07.256 ], 00:07:07.256 "product_name": "passthru", 00:07:07.256 "block_size": 512, 00:07:07.256 "num_blocks": 16384, 00:07:07.256 "uuid": "1ec3304d-7801-5b8f-a40c-d20cef259f44", 00:07:07.256 "assigned_rate_limits": { 00:07:07.256 "rw_ios_per_sec": 0, 00:07:07.256 "rw_mbytes_per_sec": 0, 00:07:07.256 "r_mbytes_per_sec": 0, 00:07:07.256 "w_mbytes_per_sec": 0 00:07:07.256 }, 00:07:07.256 "claimed": false, 00:07:07.256 "zoned": false, 00:07:07.256 "supported_io_types": { 00:07:07.256 "read": true, 00:07:07.256 "write": true, 00:07:07.256 "unmap": true, 00:07:07.256 "flush": true, 00:07:07.256 "reset": true, 00:07:07.256 "nvme_admin": false, 00:07:07.256 "nvme_io": false, 00:07:07.256 "nvme_io_md": false, 00:07:07.256 "write_zeroes": true, 00:07:07.256 "zcopy": true, 00:07:07.256 "get_zone_info": false, 00:07:07.256 "zone_management": false, 00:07:07.256 "zone_append": false, 00:07:07.256 "compare": false, 00:07:07.256 "compare_and_write": false, 00:07:07.256 "abort": true, 00:07:07.256 "seek_hole": false, 00:07:07.256 "seek_data": false, 00:07:07.256 "copy": true, 00:07:07.256 "nvme_iov_md": false 00:07:07.256 }, 00:07:07.256 "memory_domains": [ 00:07:07.256 { 00:07:07.256 "dma_device_id": "system", 00:07:07.256 
"dma_device_type": 1 00:07:07.256 }, 00:07:07.256 { 00:07:07.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.257 "dma_device_type": 2 00:07:07.257 } 00:07:07.257 ], 00:07:07.257 "driver_specific": { 00:07:07.257 "passthru": { 00:07:07.257 "name": "Passthru0", 00:07:07.257 "base_bdev_name": "Malloc0" 00:07:07.257 } 00:07:07.257 } 00:07:07.257 } 00:07:07.257 ]' 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.257 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.257 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.588 23:07:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.588 00:07:07.588 real 0m0.334s 00:07:07.588 user 0m0.229s 00:07:07.588 sys 0m0.041s 00:07:07.588 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.588 23:07:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 ************************************ 00:07:07.588 END TEST rpc_integrity 00:07:07.588 ************************************ 00:07:07.588 23:07:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:07.588 23:07:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:07.588 23:07:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.588 23:07:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.588 23:07:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 ************************************ 00:07:07.588 START TEST rpc_plugins 00:07:07.588 ************************************ 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:07:07.588 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.588 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:07.588 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.588 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.588 
23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.588 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:07.588 { 00:07:07.588 "name": "Malloc1", 00:07:07.588 "aliases": [ 00:07:07.588 "2a237383-7ec3-4ca2-9920-278b5aff8df4" 00:07:07.588 ], 00:07:07.588 "product_name": "Malloc disk", 00:07:07.588 "block_size": 4096, 00:07:07.588 "num_blocks": 256, 00:07:07.588 "uuid": "2a237383-7ec3-4ca2-9920-278b5aff8df4", 00:07:07.588 "assigned_rate_limits": { 00:07:07.588 "rw_ios_per_sec": 0, 00:07:07.588 "rw_mbytes_per_sec": 0, 00:07:07.588 "r_mbytes_per_sec": 0, 00:07:07.588 "w_mbytes_per_sec": 0 00:07:07.588 }, 00:07:07.588 "claimed": false, 00:07:07.588 "zoned": false, 00:07:07.588 "supported_io_types": { 00:07:07.588 "read": true, 00:07:07.589 "write": true, 00:07:07.589 "unmap": true, 00:07:07.589 "flush": true, 00:07:07.589 "reset": true, 00:07:07.589 "nvme_admin": false, 00:07:07.589 "nvme_io": false, 00:07:07.589 "nvme_io_md": false, 00:07:07.589 "write_zeroes": true, 00:07:07.589 "zcopy": true, 00:07:07.589 "get_zone_info": false, 00:07:07.589 "zone_management": false, 00:07:07.589 "zone_append": false, 00:07:07.589 "compare": false, 00:07:07.589 "compare_and_write": false, 00:07:07.589 "abort": true, 00:07:07.589 "seek_hole": false, 00:07:07.589 "seek_data": false, 00:07:07.589 "copy": true, 00:07:07.589 "nvme_iov_md": false 00:07:07.589 }, 00:07:07.589 "memory_domains": [ 00:07:07.589 { 00:07:07.589 "dma_device_id": "system", 00:07:07.589 "dma_device_type": 1 00:07:07.589 }, 00:07:07.589 { 00:07:07.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.589 "dma_device_type": 2 00:07:07.589 } 00:07:07.589 ], 00:07:07.589 "driver_specific": {} 00:07:07.589 } 00:07:07.589 ]' 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:07.589 23:07:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:07.589 00:07:07.589 real 0m0.158s 00:07:07.589 user 0m0.099s 00:07:07.589 sys 0m0.024s 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.589 23:07:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.589 ************************************ 00:07:07.589 END TEST rpc_plugins 00:07:07.589 ************************************ 00:07:07.589 23:07:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:07.589 23:07:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:07.589 23:07:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.589 23:07:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
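Note: rpc_plugins, which wraps up above, checks that the RPC client can load an out-of-tree plugin module: the create_malloc/delete_malloc methods come from the rpc_plugin module on the PYTHONPATH exported earlier, not from spdk_tgt itself. Reproducing that by hand could look roughly like this, with the same path caveats as before:

    export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    name=$($rpc --plugin rpc_plugin create_malloc)   # plugin-provided method, prints e.g. Malloc1
    $rpc --plugin rpc_plugin delete_malloc "$name"
    $rpc bdev_get_bdevs | jq length                  # back to 0 once the plugin bdev is gone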
00:07:07.589 23:07:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.589 ************************************ 00:07:07.589 START TEST rpc_trace_cmd_test 00:07:07.589 ************************************ 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:07.589 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58900", 00:07:07.589 "tpoint_group_mask": "0x8", 00:07:07.589 "iscsi_conn": { 00:07:07.589 "mask": "0x2", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "scsi": { 00:07:07.589 "mask": "0x4", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "bdev": { 00:07:07.589 "mask": "0x8", 00:07:07.589 "tpoint_mask": "0xffffffffffffffff" 00:07:07.589 }, 00:07:07.589 "nvmf_rdma": { 00:07:07.589 "mask": "0x10", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "nvmf_tcp": { 00:07:07.589 "mask": "0x20", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "ftl": { 00:07:07.589 "mask": "0x40", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "blobfs": { 00:07:07.589 "mask": "0x80", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "dsa": { 00:07:07.589 "mask": "0x200", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "thread": { 00:07:07.589 "mask": "0x400", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "nvme_pcie": { 00:07:07.589 "mask": "0x800", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "iaa": { 00:07:07.589 "mask": "0x1000", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "nvme_tcp": { 00:07:07.589 "mask": "0x2000", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "bdev_nvme": { 00:07:07.589 "mask": "0x4000", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 }, 00:07:07.589 "sock": { 00:07:07.589 "mask": "0x8000", 00:07:07.589 "tpoint_mask": "0x0" 00:07:07.589 } 00:07:07.589 }' 00:07:07.589 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:07.851 00:07:07.851 real 0m0.255s 00:07:07.851 user 0m0.221s 00:07:07.851 sys 0m0.027s 00:07:07.851 23:07:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.851 23:07:30 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.851 ************************************ 00:07:07.851 END TEST rpc_trace_cmd_test 00:07:07.851 ************************************ 00:07:07.851 23:07:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:07.851 23:07:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:07.851 23:07:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:07.851 23:07:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:07.851 23:07:30 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.851 23:07:30 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.851 23:07:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.851 ************************************ 00:07:07.851 START TEST rpc_daemon_integrity 00:07:07.851 ************************************ 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.851 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:08.218 { 00:07:08.218 "name": "Malloc2", 00:07:08.218 "aliases": [ 00:07:08.218 "0fc83794-277d-49b6-aedd-b387e8187c5b" 00:07:08.218 ], 00:07:08.218 "product_name": "Malloc disk", 00:07:08.218 "block_size": 512, 00:07:08.218 "num_blocks": 16384, 00:07:08.218 "uuid": "0fc83794-277d-49b6-aedd-b387e8187c5b", 00:07:08.218 "assigned_rate_limits": { 00:07:08.218 "rw_ios_per_sec": 0, 00:07:08.218 "rw_mbytes_per_sec": 0, 00:07:08.218 "r_mbytes_per_sec": 0, 00:07:08.218 "w_mbytes_per_sec": 0 00:07:08.218 }, 00:07:08.218 "claimed": false, 00:07:08.218 "zoned": false, 00:07:08.218 "supported_io_types": { 00:07:08.218 "read": true, 00:07:08.218 "write": true, 00:07:08.218 "unmap": true, 00:07:08.218 "flush": true, 00:07:08.218 "reset": true, 00:07:08.218 "nvme_admin": false, 00:07:08.218 "nvme_io": false, 00:07:08.218 "nvme_io_md": false, 00:07:08.218 "write_zeroes": true, 00:07:08.218 "zcopy": true, 00:07:08.218 "get_zone_info": false, 00:07:08.218 "zone_management": false, 00:07:08.218 "zone_append": false, 
00:07:08.218 "compare": false, 00:07:08.218 "compare_and_write": false, 00:07:08.218 "abort": true, 00:07:08.218 "seek_hole": false, 00:07:08.218 "seek_data": false, 00:07:08.218 "copy": true, 00:07:08.218 "nvme_iov_md": false 00:07:08.218 }, 00:07:08.218 "memory_domains": [ 00:07:08.218 { 00:07:08.218 "dma_device_id": "system", 00:07:08.218 "dma_device_type": 1 00:07:08.218 }, 00:07:08.218 { 00:07:08.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.218 "dma_device_type": 2 00:07:08.218 } 00:07:08.218 ], 00:07:08.218 "driver_specific": {} 00:07:08.218 } 00:07:08.218 ]' 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.218 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.218 [2024-07-24 23:07:30.459233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:08.218 [2024-07-24 23:07:30.459311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.218 [2024-07-24 23:07:30.459336] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b66be0 00:07:08.218 [2024-07-24 23:07:30.459347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.218 [2024-07-24 23:07:30.461279] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.219 [2024-07-24 23:07:30.461313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:08.219 Passthru0 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:08.219 { 00:07:08.219 "name": "Malloc2", 00:07:08.219 "aliases": [ 00:07:08.219 "0fc83794-277d-49b6-aedd-b387e8187c5b" 00:07:08.219 ], 00:07:08.219 "product_name": "Malloc disk", 00:07:08.219 "block_size": 512, 00:07:08.219 "num_blocks": 16384, 00:07:08.219 "uuid": "0fc83794-277d-49b6-aedd-b387e8187c5b", 00:07:08.219 "assigned_rate_limits": { 00:07:08.219 "rw_ios_per_sec": 0, 00:07:08.219 "rw_mbytes_per_sec": 0, 00:07:08.219 "r_mbytes_per_sec": 0, 00:07:08.219 "w_mbytes_per_sec": 0 00:07:08.219 }, 00:07:08.219 "claimed": true, 00:07:08.219 "claim_type": "exclusive_write", 00:07:08.219 "zoned": false, 00:07:08.219 "supported_io_types": { 00:07:08.219 "read": true, 00:07:08.219 "write": true, 00:07:08.219 "unmap": true, 00:07:08.219 "flush": true, 00:07:08.219 "reset": true, 00:07:08.219 "nvme_admin": false, 00:07:08.219 "nvme_io": false, 00:07:08.219 "nvme_io_md": false, 00:07:08.219 "write_zeroes": true, 00:07:08.219 "zcopy": true, 00:07:08.219 "get_zone_info": false, 00:07:08.219 "zone_management": false, 00:07:08.219 "zone_append": false, 00:07:08.219 "compare": false, 00:07:08.219 "compare_and_write": false, 00:07:08.219 "abort": true, 00:07:08.219 "seek_hole": 
false, 00:07:08.219 "seek_data": false, 00:07:08.219 "copy": true, 00:07:08.219 "nvme_iov_md": false 00:07:08.219 }, 00:07:08.219 "memory_domains": [ 00:07:08.219 { 00:07:08.219 "dma_device_id": "system", 00:07:08.219 "dma_device_type": 1 00:07:08.219 }, 00:07:08.219 { 00:07:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.219 "dma_device_type": 2 00:07:08.219 } 00:07:08.219 ], 00:07:08.219 "driver_specific": {} 00:07:08.219 }, 00:07:08.219 { 00:07:08.219 "name": "Passthru0", 00:07:08.219 "aliases": [ 00:07:08.219 "8b6f9cb7-b8ab-5328-b070-67474907bc1e" 00:07:08.219 ], 00:07:08.219 "product_name": "passthru", 00:07:08.219 "block_size": 512, 00:07:08.219 "num_blocks": 16384, 00:07:08.219 "uuid": "8b6f9cb7-b8ab-5328-b070-67474907bc1e", 00:07:08.219 "assigned_rate_limits": { 00:07:08.219 "rw_ios_per_sec": 0, 00:07:08.219 "rw_mbytes_per_sec": 0, 00:07:08.219 "r_mbytes_per_sec": 0, 00:07:08.219 "w_mbytes_per_sec": 0 00:07:08.219 }, 00:07:08.219 "claimed": false, 00:07:08.219 "zoned": false, 00:07:08.219 "supported_io_types": { 00:07:08.219 "read": true, 00:07:08.219 "write": true, 00:07:08.219 "unmap": true, 00:07:08.219 "flush": true, 00:07:08.219 "reset": true, 00:07:08.219 "nvme_admin": false, 00:07:08.219 "nvme_io": false, 00:07:08.219 "nvme_io_md": false, 00:07:08.219 "write_zeroes": true, 00:07:08.219 "zcopy": true, 00:07:08.219 "get_zone_info": false, 00:07:08.219 "zone_management": false, 00:07:08.219 "zone_append": false, 00:07:08.219 "compare": false, 00:07:08.219 "compare_and_write": false, 00:07:08.219 "abort": true, 00:07:08.219 "seek_hole": false, 00:07:08.219 "seek_data": false, 00:07:08.219 "copy": true, 00:07:08.219 "nvme_iov_md": false 00:07:08.219 }, 00:07:08.219 "memory_domains": [ 00:07:08.219 { 00:07:08.219 "dma_device_id": "system", 00:07:08.219 "dma_device_type": 1 00:07:08.219 }, 00:07:08.219 { 00:07:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.219 "dma_device_type": 2 00:07:08.219 } 00:07:08.219 ], 00:07:08.219 "driver_specific": { 00:07:08.219 "passthru": { 00:07:08.219 "name": "Passthru0", 00:07:08.219 "base_bdev_name": "Malloc2" 00:07:08.219 } 00:07:08.219 } 00:07:08.219 } 00:07:08.219 ]' 00:07:08.219 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:08.220 00:07:08.220 real 0m0.326s 00:07:08.220 user 0m0.214s 00:07:08.220 sys 0m0.047s 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.220 23:07:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:08.220 ************************************ 00:07:08.220 END TEST rpc_daemon_integrity 00:07:08.220 ************************************ 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:08.220 23:07:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:08.220 23:07:30 rpc -- rpc/rpc.sh@84 -- # killprocess 58900 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 58900 ']' 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@952 -- # kill -0 58900 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@953 -- # uname 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58900 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.220 killing process with pid 58900 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58900' 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@967 -- # kill 58900 00:07:08.220 23:07:30 rpc -- common/autotest_common.sh@972 -- # wait 58900 00:07:08.786 00:07:08.786 real 0m2.743s 00:07:08.786 user 0m3.487s 00:07:08.786 sys 0m0.699s 00:07:08.786 23:07:31 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.786 23:07:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 ************************************ 00:07:08.786 END TEST rpc 00:07:08.786 ************************************ 00:07:08.786 23:07:31 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.786 23:07:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:08.786 23:07:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.786 23:07:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.786 23:07:31 -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 ************************************ 00:07:08.786 START TEST skip_rpc 00:07:08.786 ************************************ 00:07:08.786 23:07:31 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:08.786 * Looking for test storage... 
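Note: the skip_rpc case that starts here launches spdk_tgt with --no-rpc-server and then asserts that an RPC call fails, which is why the NOT rpc_cmd spdk_get_version seen below is the expected outcome. A standalone reproduction of that negative check, again assuming the workspace paths used in this job, might be:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                   # the test script also just sleeps; there is no RPC socket to wait on
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered even though --no-rpc-server was given" >&2
    fi
    kill "$tgt_pid"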
00:07:08.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:08.786 23:07:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:08.786 23:07:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:08.786 23:07:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:08.786 23:07:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.786 23:07:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.786 23:07:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 ************************************ 00:07:08.786 START TEST skip_rpc 00:07:08.786 ************************************ 00:07:08.786 23:07:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:07:08.786 23:07:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59098 00:07:08.786 23:07:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.786 23:07:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:08.786 23:07:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:09.046 [2024-07-24 23:07:31.293222] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:09.047 [2024-07-24 23:07:31.293336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59098 ] 00:07:09.047 [2024-07-24 23:07:31.433670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.306 [2024-07-24 23:07:31.536525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.306 [2024-07-24 23:07:31.591677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59098 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59098 ']' 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59098 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59098 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.576 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.577 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59098' 00:07:14.577 killing process with pid 59098 00:07:14.577 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59098 00:07:14.577 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59098 00:07:14.577 00:07:14.577 real 0m5.439s 00:07:14.577 user 0m5.069s 00:07:14.577 sys 0m0.281s 00:07:14.577 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.577 23:07:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 ************************************ 00:07:14.577 END TEST skip_rpc 00:07:14.577 ************************************ 00:07:14.577 23:07:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:14.577 23:07:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:14.577 23:07:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.577 23:07:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.577 23:07:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 ************************************ 00:07:14.577 START TEST skip_rpc_with_json 00:07:14.577 ************************************ 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59179 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59179 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59179 ']' 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
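Note: skip_rpc_with_json, which begins here, is a round-trip test: it creates a TCP transport on a freshly started target, snapshots the whole configuration with save_config, restarts the target with --json pointing at that snapshot, and finally greps the new target's log for 'TCP Transport Init' to prove the transport was re-created from the file. Condensed into plain rpc.py calls, with the socket and file paths taken from this run, the first half looks roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    $rpc nvmf_get_transports --trtype tcp || echo "no TCP transport yet (expected on a fresh target)"
    $rpc nvmf_create_transport -t tcp
    $rpc save_config > "$cfg"   # the JSON dumped below is exactly this snapshot
    # A second spdk_tgt started with --json "$cfg" should log 'TCP Transport Init' at boot.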
00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.577 23:07:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 [2024-07-24 23:07:36.797301] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:14.577 [2024-07-24 23:07:36.797455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:07:14.577 [2024-07-24 23:07:36.941733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.577 [2024-07-24 23:07:37.059505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.835 [2024-07-24 23:07:37.115498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.400 [2024-07-24 23:07:37.768811] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:15.400 request: 00:07:15.400 { 00:07:15.400 "trtype": "tcp", 00:07:15.400 "method": "nvmf_get_transports", 00:07:15.400 "req_id": 1 00:07:15.400 } 00:07:15.400 Got JSON-RPC error response 00:07:15.400 response: 00:07:15.400 { 00:07:15.400 "code": -19, 00:07:15.400 "message": "No such device" 00:07:15.400 } 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.400 [2024-07-24 23:07:37.780897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.400 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:15.658 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.658 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:15.658 { 00:07:15.658 "subsystems": [ 00:07:15.658 { 00:07:15.658 "subsystem": "keyring", 00:07:15.658 "config": [] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "iobuf", 00:07:15.658 "config": [ 00:07:15.658 { 00:07:15.658 "method": "iobuf_set_options", 00:07:15.658 "params": { 00:07:15.658 "small_pool_count": 8192, 00:07:15.658 "large_pool_count": 1024, 00:07:15.658 "small_bufsize": 8192, 00:07:15.658 "large_bufsize": 135168 00:07:15.658 } 00:07:15.658 } 00:07:15.658 
] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "sock", 00:07:15.658 "config": [ 00:07:15.658 { 00:07:15.658 "method": "sock_set_default_impl", 00:07:15.658 "params": { 00:07:15.658 "impl_name": "uring" 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "sock_impl_set_options", 00:07:15.658 "params": { 00:07:15.658 "impl_name": "ssl", 00:07:15.658 "recv_buf_size": 4096, 00:07:15.658 "send_buf_size": 4096, 00:07:15.658 "enable_recv_pipe": true, 00:07:15.658 "enable_quickack": false, 00:07:15.658 "enable_placement_id": 0, 00:07:15.658 "enable_zerocopy_send_server": true, 00:07:15.658 "enable_zerocopy_send_client": false, 00:07:15.658 "zerocopy_threshold": 0, 00:07:15.658 "tls_version": 0, 00:07:15.658 "enable_ktls": false 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "sock_impl_set_options", 00:07:15.658 "params": { 00:07:15.658 "impl_name": "posix", 00:07:15.658 "recv_buf_size": 2097152, 00:07:15.658 "send_buf_size": 2097152, 00:07:15.658 "enable_recv_pipe": true, 00:07:15.658 "enable_quickack": false, 00:07:15.658 "enable_placement_id": 0, 00:07:15.658 "enable_zerocopy_send_server": true, 00:07:15.658 "enable_zerocopy_send_client": false, 00:07:15.658 "zerocopy_threshold": 0, 00:07:15.658 "tls_version": 0, 00:07:15.658 "enable_ktls": false 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "sock_impl_set_options", 00:07:15.658 "params": { 00:07:15.658 "impl_name": "uring", 00:07:15.658 "recv_buf_size": 2097152, 00:07:15.658 "send_buf_size": 2097152, 00:07:15.658 "enable_recv_pipe": true, 00:07:15.658 "enable_quickack": false, 00:07:15.658 "enable_placement_id": 0, 00:07:15.658 "enable_zerocopy_send_server": false, 00:07:15.658 "enable_zerocopy_send_client": false, 00:07:15.658 "zerocopy_threshold": 0, 00:07:15.658 "tls_version": 0, 00:07:15.658 "enable_ktls": false 00:07:15.658 } 00:07:15.658 } 00:07:15.658 ] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "vmd", 00:07:15.658 "config": [] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "accel", 00:07:15.658 "config": [ 00:07:15.658 { 00:07:15.658 "method": "accel_set_options", 00:07:15.658 "params": { 00:07:15.658 "small_cache_size": 128, 00:07:15.658 "large_cache_size": 16, 00:07:15.658 "task_count": 2048, 00:07:15.658 "sequence_count": 2048, 00:07:15.658 "buf_count": 2048 00:07:15.658 } 00:07:15.658 } 00:07:15.658 ] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "bdev", 00:07:15.658 "config": [ 00:07:15.658 { 00:07:15.658 "method": "bdev_set_options", 00:07:15.658 "params": { 00:07:15.658 "bdev_io_pool_size": 65535, 00:07:15.658 "bdev_io_cache_size": 256, 00:07:15.658 "bdev_auto_examine": true, 00:07:15.658 "iobuf_small_cache_size": 128, 00:07:15.658 "iobuf_large_cache_size": 16 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "bdev_raid_set_options", 00:07:15.658 "params": { 00:07:15.658 "process_window_size_kb": 1024 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "bdev_iscsi_set_options", 00:07:15.658 "params": { 00:07:15.658 "timeout_sec": 30 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "bdev_nvme_set_options", 00:07:15.658 "params": { 00:07:15.658 "action_on_timeout": "none", 00:07:15.658 "timeout_us": 0, 00:07:15.658 "timeout_admin_us": 0, 00:07:15.658 "keep_alive_timeout_ms": 10000, 00:07:15.658 "arbitration_burst": 0, 00:07:15.658 "low_priority_weight": 0, 00:07:15.658 "medium_priority_weight": 0, 00:07:15.658 "high_priority_weight": 0, 00:07:15.658 
"nvme_adminq_poll_period_us": 10000, 00:07:15.658 "nvme_ioq_poll_period_us": 0, 00:07:15.658 "io_queue_requests": 0, 00:07:15.658 "delay_cmd_submit": true, 00:07:15.658 "transport_retry_count": 4, 00:07:15.658 "bdev_retry_count": 3, 00:07:15.658 "transport_ack_timeout": 0, 00:07:15.658 "ctrlr_loss_timeout_sec": 0, 00:07:15.658 "reconnect_delay_sec": 0, 00:07:15.658 "fast_io_fail_timeout_sec": 0, 00:07:15.658 "disable_auto_failback": false, 00:07:15.658 "generate_uuids": false, 00:07:15.658 "transport_tos": 0, 00:07:15.658 "nvme_error_stat": false, 00:07:15.658 "rdma_srq_size": 0, 00:07:15.658 "io_path_stat": false, 00:07:15.658 "allow_accel_sequence": false, 00:07:15.658 "rdma_max_cq_size": 0, 00:07:15.658 "rdma_cm_event_timeout_ms": 0, 00:07:15.658 "dhchap_digests": [ 00:07:15.658 "sha256", 00:07:15.658 "sha384", 00:07:15.658 "sha512" 00:07:15.658 ], 00:07:15.658 "dhchap_dhgroups": [ 00:07:15.658 "null", 00:07:15.658 "ffdhe2048", 00:07:15.658 "ffdhe3072", 00:07:15.658 "ffdhe4096", 00:07:15.658 "ffdhe6144", 00:07:15.658 "ffdhe8192" 00:07:15.658 ] 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "bdev_nvme_set_hotplug", 00:07:15.658 "params": { 00:07:15.658 "period_us": 100000, 00:07:15.658 "enable": false 00:07:15.658 } 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "method": "bdev_wait_for_examine" 00:07:15.658 } 00:07:15.658 ] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "scsi", 00:07:15.658 "config": null 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "scheduler", 00:07:15.658 "config": [ 00:07:15.658 { 00:07:15.658 "method": "framework_set_scheduler", 00:07:15.658 "params": { 00:07:15.658 "name": "static" 00:07:15.658 } 00:07:15.658 } 00:07:15.658 ] 00:07:15.658 }, 00:07:15.658 { 00:07:15.658 "subsystem": "vhost_scsi", 00:07:15.658 "config": [] 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "subsystem": "vhost_blk", 00:07:15.659 "config": [] 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "subsystem": "ublk", 00:07:15.659 "config": [] 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "subsystem": "nbd", 00:07:15.659 "config": [] 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "subsystem": "nvmf", 00:07:15.659 "config": [ 00:07:15.659 { 00:07:15.659 "method": "nvmf_set_config", 00:07:15.659 "params": { 00:07:15.659 "discovery_filter": "match_any", 00:07:15.659 "admin_cmd_passthru": { 00:07:15.659 "identify_ctrlr": false 00:07:15.659 } 00:07:15.659 } 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "method": "nvmf_set_max_subsystems", 00:07:15.659 "params": { 00:07:15.659 "max_subsystems": 1024 00:07:15.659 } 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "method": "nvmf_set_crdt", 00:07:15.659 "params": { 00:07:15.659 "crdt1": 0, 00:07:15.659 "crdt2": 0, 00:07:15.659 "crdt3": 0 00:07:15.659 } 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "method": "nvmf_create_transport", 00:07:15.659 "params": { 00:07:15.659 "trtype": "TCP", 00:07:15.659 "max_queue_depth": 128, 00:07:15.659 "max_io_qpairs_per_ctrlr": 127, 00:07:15.659 "in_capsule_data_size": 4096, 00:07:15.659 "max_io_size": 131072, 00:07:15.659 "io_unit_size": 131072, 00:07:15.659 "max_aq_depth": 128, 00:07:15.659 "num_shared_buffers": 511, 00:07:15.659 "buf_cache_size": 4294967295, 00:07:15.659 "dif_insert_or_strip": false, 00:07:15.659 "zcopy": false, 00:07:15.659 "c2h_success": true, 00:07:15.659 "sock_priority": 0, 00:07:15.659 "abort_timeout_sec": 1, 00:07:15.659 "ack_timeout": 0, 00:07:15.659 "data_wr_pool_size": 0 00:07:15.659 } 00:07:15.659 } 00:07:15.659 ] 00:07:15.659 }, 00:07:15.659 { 00:07:15.659 "subsystem": 
"iscsi", 00:07:15.659 "config": [ 00:07:15.659 { 00:07:15.659 "method": "iscsi_set_options", 00:07:15.659 "params": { 00:07:15.659 "node_base": "iqn.2016-06.io.spdk", 00:07:15.659 "max_sessions": 128, 00:07:15.659 "max_connections_per_session": 2, 00:07:15.659 "max_queue_depth": 64, 00:07:15.659 "default_time2wait": 2, 00:07:15.659 "default_time2retain": 20, 00:07:15.659 "first_burst_length": 8192, 00:07:15.659 "immediate_data": true, 00:07:15.659 "allow_duplicated_isid": false, 00:07:15.659 "error_recovery_level": 0, 00:07:15.659 "nop_timeout": 60, 00:07:15.659 "nop_in_interval": 30, 00:07:15.659 "disable_chap": false, 00:07:15.659 "require_chap": false, 00:07:15.659 "mutual_chap": false, 00:07:15.659 "chap_group": 0, 00:07:15.659 "max_large_datain_per_connection": 64, 00:07:15.659 "max_r2t_per_connection": 4, 00:07:15.659 "pdu_pool_size": 36864, 00:07:15.659 "immediate_data_pool_size": 16384, 00:07:15.659 "data_out_pool_size": 2048 00:07:15.659 } 00:07:15.659 } 00:07:15.659 ] 00:07:15.659 } 00:07:15.659 ] 00:07:15.659 } 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59179 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59179 ']' 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59179 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59179 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.659 killing process with pid 59179 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59179' 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59179 00:07:15.659 23:07:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59179 00:07:15.917 23:07:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:15.917 23:07:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59212 00:07:15.917 23:07:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59212 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59212 ']' 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59212 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59212 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.180 killing process with pid 59212 00:07:21.180 23:07:43 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59212' 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59212 00:07:21.180 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59212 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:21.437 00:07:21.437 real 0m7.058s 00:07:21.437 user 0m6.768s 00:07:21.437 sys 0m0.670s 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.437 ************************************ 00:07:21.437 END TEST skip_rpc_with_json 00:07:21.437 ************************************ 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.437 23:07:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:21.437 23:07:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:21.437 23:07:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.437 23:07:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.437 23:07:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.437 ************************************ 00:07:21.437 START TEST skip_rpc_with_delay 00:07:21.437 ************************************ 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:21.437 [2024-07-24 
23:07:43.903125] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:21.437 [2024-07-24 23:07:43.903315] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:21.437 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.695 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.695 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.695 00:07:21.695 real 0m0.094s 00:07:21.695 user 0m0.054s 00:07:21.695 sys 0m0.039s 00:07:21.695 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.695 23:07:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 ************************************ 00:07:21.695 END TEST skip_rpc_with_delay 00:07:21.695 ************************************ 00:07:21.695 23:07:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:21.695 23:07:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:21.695 23:07:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:21.695 23:07:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:21.695 23:07:43 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.695 23:07:43 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.695 23:07:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 ************************************ 00:07:21.695 START TEST exit_on_failed_rpc_init 00:07:21.695 ************************************ 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59316 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59316 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59316 ']' 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.695 23:07:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.695 [2024-07-24 23:07:44.034758] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
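For reference, the failure path exercised by skip_rpc_with_delay above can be reproduced by hand. This is only an illustrative sketch using the binary path from this workspace, not the test itself; it simply expects the non-zero exit and the "Cannot use '--wait-for-rpc' if no RPC server is going to be started." error seen in the trace:

    # Expect spdk_tgt to refuse --wait-for-rpc when the RPC server is disabled.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success" >&2
        exit 1
    else
        echo "got the expected non-zero exit"
    fi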
00:07:21.695 [2024-07-24 23:07:44.034841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:07:21.695 [2024-07-24 23:07:44.167290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.953 [2024-07-24 23:07:44.272501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.953 [2024-07-24 23:07:44.326316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:22.887 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:22.887 [2024-07-24 23:07:45.070824] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:22.887 [2024-07-24 23:07:45.070920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59334 ] 00:07:22.887 [2024-07-24 23:07:45.207308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.887 [2024-07-24 23:07:45.314820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.887 [2024-07-24 23:07:45.314928] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
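The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is exactly the condition exit_on_failed_rpc_init checks: a second target that cannot claim the RPC socket must fail its RPC init and shut down. A hand-run sketch of the same collision (binary path as in this workspace, both instances using the default socket; the sleep is a crude stand-in for the harness's waitforlisten helper) might look like:

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &          # first instance claims the default /var/tmp/spdk.sock
    pid1=$!
    sleep 2                       # crude settle time; the test uses waitforlisten instead
    if "$SPDK_TGT" -m 0x2; then   # second instance should fail: RPC socket already in use
        echo "unexpected success" >&2
    fi
    kill -SIGINT "$pid1"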
00:07:22.887 [2024-07-24 23:07:45.314947] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:22.887 [2024-07-24 23:07:45.314958] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59316 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59316 ']' 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59316 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59316 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.146 killing process with pid 59316 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59316' 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59316 00:07:23.146 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59316 00:07:23.404 00:07:23.404 real 0m1.833s 00:07:23.404 user 0m2.157s 00:07:23.404 sys 0m0.403s 00:07:23.404 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.404 23:07:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:23.404 ************************************ 00:07:23.404 END TEST exit_on_failed_rpc_init 00:07:23.404 ************************************ 00:07:23.404 23:07:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:23.404 23:07:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:23.404 00:07:23.404 real 0m14.721s 00:07:23.404 user 0m14.146s 00:07:23.404 sys 0m1.581s 00:07:23.404 23:07:45 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.404 23:07:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.404 ************************************ 00:07:23.404 END TEST skip_rpc 00:07:23.404 ************************************ 00:07:23.662 23:07:45 -- common/autotest_common.sh@1142 -- # return 0 00:07:23.662 23:07:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:23.662 23:07:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.662 
23:07:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.662 23:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:23.662 ************************************ 00:07:23.662 START TEST rpc_client 00:07:23.662 ************************************ 00:07:23.662 23:07:45 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:23.662 * Looking for test storage... 00:07:23.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:23.662 23:07:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:23.662 OK 00:07:23.662 23:07:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:23.662 00:07:23.662 real 0m0.103s 00:07:23.662 user 0m0.050s 00:07:23.662 sys 0m0.059s 00:07:23.662 23:07:46 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.662 23:07:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:23.662 ************************************ 00:07:23.662 END TEST rpc_client 00:07:23.662 ************************************ 00:07:23.662 23:07:46 -- common/autotest_common.sh@1142 -- # return 0 00:07:23.662 23:07:46 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.662 23:07:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.662 23:07:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.662 23:07:46 -- common/autotest_common.sh@10 -- # set +x 00:07:23.662 ************************************ 00:07:23.662 START TEST json_config 00:07:23.663 ************************************ 00:07:23.663 23:07:46 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.663 23:07:46 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.663 23:07:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.663 23:07:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.663 23:07:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.663 23:07:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.663 23:07:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.663 23:07:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.663 23:07:46 json_config -- paths/export.sh@5 -- # export PATH 00:07:23.663 23:07:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@47 -- # : 0 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.663 23:07:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:23.663 23:07:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:23.921 INFO: JSON configuration test init 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.921 23:07:46 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:23.921 23:07:46 json_config -- json_config/common.sh@9 -- # local app=target 00:07:23.921 23:07:46 json_config -- json_config/common.sh@10 -- # shift 00:07:23.921 23:07:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:23.921 23:07:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:23.921 23:07:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:23.921 23:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.921 Waiting for target to run... 00:07:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:23.921 23:07:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:23.921 23:07:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59452 00:07:23.921 23:07:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:07:23.921 23:07:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:23.921 23:07:46 json_config -- json_config/common.sh@25 -- # waitforlisten 59452 /var/tmp/spdk_tgt.sock 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@829 -- # '[' -z 59452 ']' 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.921 23:07:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.921 [2024-07-24 23:07:46.224876] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:23.921 [2024-07-24 23:07:46.226017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:07:24.487 [2024-07-24 23:07:46.666895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.487 [2024-07-24 23:07:46.755431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:24.746 23:07:47 json_config -- json_config/common.sh@26 -- # echo '' 00:07:24.746 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.746 23:07:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:24.746 23:07:47 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:24.746 23:07:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:25.005 [2024-07-24 23:07:47.472067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.264 23:07:47 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:25.264 23:07:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:25.264 23:07:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.264 23:07:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.264 23:07:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:25.264 23:07:47 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:25.264 23:07:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:25.265 23:07:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:25.265 23:07:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:25.265 23:07:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@51 -- # sort 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:25.526 23:07:47 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:25.526 23:07:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:25.526 23:07:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:07:25.784 23:07:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.784 23:07:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:07:25.784 23:07:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:25.784 23:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:26.042 MallocForNvmf0 00:07:26.042 23:07:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:26.042 23:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:26.300 MallocForNvmf1 00:07:26.300 23:07:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:26.300 
23:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:26.558 [2024-07-24 23:07:48.853037] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.558 23:07:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.558 23:07:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:26.816 23:07:49 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:26.816 23:07:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:27.075 23:07:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:27.075 23:07:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:27.333 23:07:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:27.333 23:07:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:27.333 [2024-07-24 23:07:49.809494] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:27.591 23:07:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:07:27.591 23:07:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.591 23:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 23:07:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:27.591 23:07:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.591 23:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 23:07:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:27.591 23:07:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:27.591 23:07:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:27.849 MallocBdevForConfigChangeCheck 00:07:27.849 23:07:50 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:27.849 23:07:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.849 23:07:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:27.849 23:07:50 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:27.849 23:07:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:28.415 INFO: shutting down applications... 
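Collected from the tgt_rpc calls traced above, the standalone rpc.py sequence that builds this NVMe-oF target state is, in order (socket path, bdev names, NQN, and flags exactly as in the trace; the $RPC variable is only shorthand added here for brevity):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420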
00:07:28.415 23:07:50 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:28.415 23:07:50 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:28.415 23:07:50 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:28.415 23:07:50 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:28.415 23:07:50 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:28.673 Calling clear_iscsi_subsystem 00:07:28.673 Calling clear_nvmf_subsystem 00:07:28.673 Calling clear_nbd_subsystem 00:07:28.673 Calling clear_ublk_subsystem 00:07:28.674 Calling clear_vhost_blk_subsystem 00:07:28.674 Calling clear_vhost_scsi_subsystem 00:07:28.674 Calling clear_bdev_subsystem 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:28.674 23:07:51 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:29.241 23:07:51 json_config -- json_config/json_config.sh@349 -- # break 00:07:29.241 23:07:51 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:29.241 23:07:51 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:29.241 23:07:51 json_config -- json_config/common.sh@31 -- # local app=target 00:07:29.241 23:07:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:29.241 23:07:51 json_config -- json_config/common.sh@35 -- # [[ -n 59452 ]] 00:07:29.241 23:07:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59452 00:07:29.241 23:07:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:29.241 23:07:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:29.241 23:07:51 json_config -- json_config/common.sh@41 -- # kill -0 59452 00:07:29.241 23:07:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:29.499 23:07:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:29.499 23:07:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:29.499 23:07:51 json_config -- json_config/common.sh@41 -- # kill -0 59452 00:07:29.499 SPDK target shutdown done 00:07:29.499 INFO: relaunching applications... 00:07:29.499 23:07:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:29.499 23:07:51 json_config -- json_config/common.sh@43 -- # break 00:07:29.499 23:07:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:29.499 23:07:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:29.500 23:07:51 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
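The shutdown traced above follows a simple SIGINT-and-poll pattern. A minimal standalone version of that loop (the $app_pid variable is hypothetical here; the 30 iterations and 0.5 s sleep mirror the values visible in the trace) would be:

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process gone: shutdown finished
        sleep 0.5
    done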
00:07:29.500 23:07:51 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:29.500 23:07:51 json_config -- json_config/common.sh@9 -- # local app=target 00:07:29.500 23:07:51 json_config -- json_config/common.sh@10 -- # shift 00:07:29.500 23:07:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:29.500 23:07:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:29.500 23:07:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:29.500 23:07:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.500 23:07:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.500 23:07:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59653 00:07:29.500 23:07:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:29.500 Waiting for target to run... 00:07:29.500 23:07:51 json_config -- json_config/common.sh@25 -- # waitforlisten 59653 /var/tmp/spdk_tgt.sock 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 59653 ']' 00:07:29.500 23:07:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.500 23:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.758 [2024-07-24 23:07:52.000802] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:29.758 [2024-07-24 23:07:52.000909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59653 ] 00:07:30.016 [2024-07-24 23:07:52.422804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.275 [2024-07-24 23:07:52.511356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.275 [2024-07-24 23:07:52.637398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.533 [2024-07-24 23:07:52.843893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.533 [2024-07-24 23:07:52.875980] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:30.533 00:07:30.533 INFO: Checking if target configuration is the same... 
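Outside the test harness, the save-and-relaunch round trip seen here boils down to two steps, sketched below with the paths and flags from this workspace (the redirect of save_config to a file is an assumption about how the JSON ends up on disk; the relaunch command itself is as traced). The harness then normalizes both the saved file and a fresh dump with config_filter.py -method sort before diffing them, as the next trace lines show:

    # Dump the running target's configuration, then restart from that file.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json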
00:07:30.533 23:07:52 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.533 23:07:52 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:30.533 23:07:52 json_config -- json_config/common.sh@26 -- # echo '' 00:07:30.533 23:07:52 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:30.533 23:07:52 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:30.533 23:07:52 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:30.533 23:07:52 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:30.533 23:07:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.533 + '[' 2 -ne 2 ']' 00:07:30.533 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:30.533 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:30.533 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:30.533 +++ basename /dev/fd/62 00:07:30.533 ++ mktemp /tmp/62.XXX 00:07:30.533 + tmp_file_1=/tmp/62.maV 00:07:30.533 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:30.533 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:30.533 + tmp_file_2=/tmp/spdk_tgt_config.json.8UT 00:07:30.533 + ret=0 00:07:30.533 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:31.099 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:31.099 + diff -u /tmp/62.maV /tmp/spdk_tgt_config.json.8UT 00:07:31.099 INFO: JSON config files are the same 00:07:31.099 + echo 'INFO: JSON config files are the same' 00:07:31.099 + rm /tmp/62.maV /tmp/spdk_tgt_config.json.8UT 00:07:31.099 + exit 0 00:07:31.099 INFO: changing configuration and checking if this can be detected... 00:07:31.099 23:07:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:31.099 23:07:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:31.099 23:07:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:31.099 23:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:31.357 23:07:53 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:31.357 23:07:53 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.357 23:07:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:31.357 + '[' 2 -ne 2 ']' 00:07:31.357 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:31.357 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:31.357 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:31.357 +++ basename /dev/fd/62 00:07:31.357 ++ mktemp /tmp/62.XXX 00:07:31.357 + tmp_file_1=/tmp/62.Tw4 00:07:31.357 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:31.357 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:31.357 + tmp_file_2=/tmp/spdk_tgt_config.json.5kh 00:07:31.357 + ret=0 00:07:31.357 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:31.926 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:31.926 + diff -u /tmp/62.Tw4 /tmp/spdk_tgt_config.json.5kh 00:07:31.926 + ret=1 00:07:31.926 + echo '=== Start of file: /tmp/62.Tw4 ===' 00:07:31.926 + cat /tmp/62.Tw4 00:07:31.926 + echo '=== End of file: /tmp/62.Tw4 ===' 00:07:31.926 + echo '' 00:07:31.926 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5kh ===' 00:07:31.926 + cat /tmp/spdk_tgt_config.json.5kh 00:07:31.926 + echo '=== End of file: /tmp/spdk_tgt_config.json.5kh ===' 00:07:31.926 + echo '' 00:07:31.926 + rm /tmp/62.Tw4 /tmp/spdk_tgt_config.json.5kh 00:07:31.926 + exit 1 00:07:31.926 INFO: configuration change detected. 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 59653 ]] 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:31.926 23:07:54 json_config -- json_config/json_config.sh@327 -- # killprocess 59653 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@948 -- # '[' -z 59653 ']' 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@952 -- # kill -0 59653 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@953 -- # uname 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59653 00:07:31.926 
killing process with pid 59653 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59653' 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@967 -- # kill 59653 00:07:31.926 23:07:54 json_config -- common/autotest_common.sh@972 -- # wait 59653 00:07:32.185 23:07:54 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:32.185 23:07:54 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:32.185 23:07:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.185 23:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.185 INFO: Success 00:07:32.185 23:07:54 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:32.185 23:07:54 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:32.185 ************************************ 00:07:32.185 END TEST json_config 00:07:32.185 ************************************ 00:07:32.185 00:07:32.185 real 0m8.480s 00:07:32.185 user 0m12.129s 00:07:32.185 sys 0m1.800s 00:07:32.185 23:07:54 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.185 23:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.185 23:07:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:32.185 23:07:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:32.185 23:07:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.185 23:07:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.185 23:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:32.185 ************************************ 00:07:32.185 START TEST json_config_extra_key 00:07:32.185 ************************************ 00:07:32.185 23:07:54 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:32.185 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.185 23:07:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.185 23:07:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.185 23:07:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.185 23:07:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.185 23:07:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.185 23:07:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.185 23:07:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:32.185 23:07:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.185 23:07:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:32.186 23:07:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.186 23:07:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.186 23:07:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.186 23:07:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.186 23:07:54 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.465 23:07:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.465 23:07:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.465 23:07:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:32.465 INFO: launching applications... 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:32.465 23:07:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.465 Waiting for target to run... 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59799 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:32.465 23:07:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59799 /var/tmp/spdk_tgt.sock 00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59799 ']' 00:07:32.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.465 23:07:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:32.465 [2024-07-24 23:07:54.758412] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:32.465 [2024-07-24 23:07:54.758572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59799 ] 00:07:32.731 [2024-07-24 23:07:55.189617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.989 [2024-07-24 23:07:55.288722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.989 [2024-07-24 23:07:55.310636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:33.555 00:07:33.555 INFO: shutting down applications... 00:07:33.555 23:07:55 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.555 23:07:55 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:33.555 23:07:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:33.555 23:07:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59799 ]] 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59799 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59799 00:07:33.555 23:07:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59799 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:33.814 23:07:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:33.814 SPDK target shutdown done 00:07:33.814 23:07:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:33.814 Success 00:07:33.814 00:07:33.814 real 0m1.697s 00:07:33.814 user 0m1.658s 00:07:33.814 sys 0m0.436s 00:07:33.814 23:07:56 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.814 23:07:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:33.814 ************************************ 00:07:33.814 END TEST json_config_extra_key 00:07:33.814 ************************************ 00:07:34.074 23:07:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:34.074 23:07:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:34.074 23:07:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.074 23:07:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.074 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:07:34.074 ************************************ 00:07:34.074 START TEST alias_rpc 00:07:34.074 ************************************ 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:34.074 * Looking for test storage... 
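The json_config_extra_key run above boils down to one pattern: start spdk_tgt against a pre-built JSON configuration, poll its RPC UNIX socket until it answers, then drive the SIGINT shutdown path. A minimal bash sketch of that pattern follows, using the binary path, flags and socket shown in the trace; the wait_for_rpc helper is an illustrative stand-in for the test suite's waitforlisten(), not its actual code.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
tgt_pid=$!

wait_for_rpc() {
    # Poll the RPC socket until the target responds; give up after ~15 s.
    for _ in $(seq 1 30); do
        "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}

wait_for_rpc || { kill "$tgt_pid"; exit 1; }
# ...checks against the loaded config would go here, then the same shutdown the test uses:
kill -SIGINT "$tgt_pid"
wait "$tgt_pid"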
00:07:34.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:34.074 23:07:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:34.074 23:07:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59858 00:07:34.074 23:07:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:34.074 23:07:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59858 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59858 ']' 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.074 23:07:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.074 [2024-07-24 23:07:56.487277] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:34.074 [2024-07-24 23:07:56.487360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59858 ] 00:07:34.333 [2024-07-24 23:07:56.622460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.333 [2024-07-24 23:07:56.744076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.333 [2024-07-24 23:07:56.798787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.268 23:07:57 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.268 23:07:57 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:35.268 23:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:35.527 23:07:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59858 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59858 ']' 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59858 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59858 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59858' 00:07:35.527 killing process with pid 59858 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@967 -- # kill 59858 00:07:35.527 23:07:57 alias_rpc -- common/autotest_common.sh@972 -- # wait 59858 00:07:35.785 00:07:35.785 real 0m1.874s 00:07:35.785 user 0m2.167s 00:07:35.785 sys 0m0.430s 00:07:35.785 ************************************ 00:07:35.785 END TEST alias_rpc 00:07:35.785 ************************************ 00:07:35.785 23:07:58 alias_rpc -- 
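The alias_rpc stage above is shorter still: it launches a bare spdk_tgt on the default /var/tmp/spdk.sock, pushes a JSON configuration into it with rpc.py load_config -i, and kills the target. A rough sketch under stated assumptions: conf.json is a hypothetical saved configuration of mine, load_config is assumed to read it from stdin when no filename is given, and the -i switch is copied verbatim from the trace (judging by the test name it also registers the old aliased method names, but that reading is an inference, not something the log states).

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$SPDK_BIN" &                 # default RPC socket: /var/tmp/spdk.sock
tgt_pid=$!
sleep 1                       # crude stand-in for the suite's waitforlisten()

# conf.json is a placeholder; -i is taken as shown in the trace above.
"$RPC_PY" load_config -i < conf.json

kill "$tgt_pid"
wait "$tgt_pid"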
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.785 23:07:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.785 23:07:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:35.785 23:07:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:35.785 23:07:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:35.785 23:07:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.785 23:07:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.785 23:07:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.044 ************************************ 00:07:36.044 START TEST spdkcli_tcp 00:07:36.044 ************************************ 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:36.044 * Looking for test storage... 00:07:36.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59934 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59934 00:07:36.044 23:07:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59934 ']' 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.044 23:07:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.044 [2024-07-24 23:07:58.433261] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:36.044 [2024-07-24 23:07:58.434087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:07:36.327 [2024-07-24 23:07:58.571925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.327 [2024-07-24 23:07:58.661248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.327 [2024-07-24 23:07:58.661259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.327 [2024-07-24 23:07:58.716305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.259 23:07:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.259 23:07:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:07:37.259 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59951 00:07:37.259 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:37.259 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:37.259 [ 00:07:37.259 "bdev_malloc_delete", 00:07:37.259 "bdev_malloc_create", 00:07:37.259 "bdev_null_resize", 00:07:37.259 "bdev_null_delete", 00:07:37.259 "bdev_null_create", 00:07:37.259 "bdev_nvme_cuse_unregister", 00:07:37.259 "bdev_nvme_cuse_register", 00:07:37.259 "bdev_opal_new_user", 00:07:37.259 "bdev_opal_set_lock_state", 00:07:37.259 "bdev_opal_delete", 00:07:37.259 "bdev_opal_get_info", 00:07:37.259 "bdev_opal_create", 00:07:37.259 "bdev_nvme_opal_revert", 00:07:37.259 "bdev_nvme_opal_init", 00:07:37.259 "bdev_nvme_send_cmd", 00:07:37.259 "bdev_nvme_get_path_iostat", 00:07:37.259 "bdev_nvme_get_mdns_discovery_info", 00:07:37.259 "bdev_nvme_stop_mdns_discovery", 00:07:37.259 "bdev_nvme_start_mdns_discovery", 00:07:37.259 "bdev_nvme_set_multipath_policy", 00:07:37.259 "bdev_nvme_set_preferred_path", 00:07:37.259 "bdev_nvme_get_io_paths", 00:07:37.259 "bdev_nvme_remove_error_injection", 00:07:37.259 "bdev_nvme_add_error_injection", 00:07:37.259 "bdev_nvme_get_discovery_info", 00:07:37.259 "bdev_nvme_stop_discovery", 00:07:37.259 "bdev_nvme_start_discovery", 00:07:37.259 "bdev_nvme_get_controller_health_info", 00:07:37.259 "bdev_nvme_disable_controller", 00:07:37.259 "bdev_nvme_enable_controller", 00:07:37.259 "bdev_nvme_reset_controller", 00:07:37.259 "bdev_nvme_get_transport_statistics", 00:07:37.259 "bdev_nvme_apply_firmware", 00:07:37.259 "bdev_nvme_detach_controller", 00:07:37.259 "bdev_nvme_get_controllers", 00:07:37.260 "bdev_nvme_attach_controller", 00:07:37.260 "bdev_nvme_set_hotplug", 00:07:37.260 "bdev_nvme_set_options", 00:07:37.260 "bdev_passthru_delete", 00:07:37.260 "bdev_passthru_create", 00:07:37.260 "bdev_lvol_set_parent_bdev", 00:07:37.260 "bdev_lvol_set_parent", 00:07:37.260 "bdev_lvol_check_shallow_copy", 00:07:37.260 "bdev_lvol_start_shallow_copy", 00:07:37.260 "bdev_lvol_grow_lvstore", 00:07:37.260 "bdev_lvol_get_lvols", 00:07:37.260 "bdev_lvol_get_lvstores", 00:07:37.260 "bdev_lvol_delete", 00:07:37.260 "bdev_lvol_set_read_only", 00:07:37.260 "bdev_lvol_resize", 00:07:37.260 "bdev_lvol_decouple_parent", 00:07:37.260 "bdev_lvol_inflate", 00:07:37.260 "bdev_lvol_rename", 00:07:37.260 "bdev_lvol_clone_bdev", 00:07:37.260 "bdev_lvol_clone", 00:07:37.260 "bdev_lvol_snapshot", 00:07:37.260 "bdev_lvol_create", 
00:07:37.260 "bdev_lvol_delete_lvstore", 00:07:37.260 "bdev_lvol_rename_lvstore", 00:07:37.260 "bdev_lvol_create_lvstore", 00:07:37.260 "bdev_raid_set_options", 00:07:37.260 "bdev_raid_remove_base_bdev", 00:07:37.260 "bdev_raid_add_base_bdev", 00:07:37.260 "bdev_raid_delete", 00:07:37.260 "bdev_raid_create", 00:07:37.260 "bdev_raid_get_bdevs", 00:07:37.260 "bdev_error_inject_error", 00:07:37.260 "bdev_error_delete", 00:07:37.260 "bdev_error_create", 00:07:37.260 "bdev_split_delete", 00:07:37.260 "bdev_split_create", 00:07:37.260 "bdev_delay_delete", 00:07:37.260 "bdev_delay_create", 00:07:37.260 "bdev_delay_update_latency", 00:07:37.260 "bdev_zone_block_delete", 00:07:37.260 "bdev_zone_block_create", 00:07:37.260 "blobfs_create", 00:07:37.260 "blobfs_detect", 00:07:37.260 "blobfs_set_cache_size", 00:07:37.260 "bdev_aio_delete", 00:07:37.260 "bdev_aio_rescan", 00:07:37.260 "bdev_aio_create", 00:07:37.260 "bdev_ftl_set_property", 00:07:37.260 "bdev_ftl_get_properties", 00:07:37.260 "bdev_ftl_get_stats", 00:07:37.260 "bdev_ftl_unmap", 00:07:37.260 "bdev_ftl_unload", 00:07:37.260 "bdev_ftl_delete", 00:07:37.260 "bdev_ftl_load", 00:07:37.260 "bdev_ftl_create", 00:07:37.260 "bdev_virtio_attach_controller", 00:07:37.260 "bdev_virtio_scsi_get_devices", 00:07:37.260 "bdev_virtio_detach_controller", 00:07:37.260 "bdev_virtio_blk_set_hotplug", 00:07:37.260 "bdev_iscsi_delete", 00:07:37.260 "bdev_iscsi_create", 00:07:37.260 "bdev_iscsi_set_options", 00:07:37.260 "bdev_uring_delete", 00:07:37.260 "bdev_uring_rescan", 00:07:37.260 "bdev_uring_create", 00:07:37.260 "accel_error_inject_error", 00:07:37.260 "ioat_scan_accel_module", 00:07:37.260 "dsa_scan_accel_module", 00:07:37.260 "iaa_scan_accel_module", 00:07:37.260 "keyring_file_remove_key", 00:07:37.260 "keyring_file_add_key", 00:07:37.260 "keyring_linux_set_options", 00:07:37.260 "iscsi_get_histogram", 00:07:37.260 "iscsi_enable_histogram", 00:07:37.260 "iscsi_set_options", 00:07:37.260 "iscsi_get_auth_groups", 00:07:37.260 "iscsi_auth_group_remove_secret", 00:07:37.260 "iscsi_auth_group_add_secret", 00:07:37.260 "iscsi_delete_auth_group", 00:07:37.260 "iscsi_create_auth_group", 00:07:37.260 "iscsi_set_discovery_auth", 00:07:37.260 "iscsi_get_options", 00:07:37.260 "iscsi_target_node_request_logout", 00:07:37.260 "iscsi_target_node_set_redirect", 00:07:37.260 "iscsi_target_node_set_auth", 00:07:37.260 "iscsi_target_node_add_lun", 00:07:37.260 "iscsi_get_stats", 00:07:37.260 "iscsi_get_connections", 00:07:37.260 "iscsi_portal_group_set_auth", 00:07:37.260 "iscsi_start_portal_group", 00:07:37.260 "iscsi_delete_portal_group", 00:07:37.260 "iscsi_create_portal_group", 00:07:37.260 "iscsi_get_portal_groups", 00:07:37.260 "iscsi_delete_target_node", 00:07:37.260 "iscsi_target_node_remove_pg_ig_maps", 00:07:37.260 "iscsi_target_node_add_pg_ig_maps", 00:07:37.260 "iscsi_create_target_node", 00:07:37.260 "iscsi_get_target_nodes", 00:07:37.260 "iscsi_delete_initiator_group", 00:07:37.260 "iscsi_initiator_group_remove_initiators", 00:07:37.260 "iscsi_initiator_group_add_initiators", 00:07:37.260 "iscsi_create_initiator_group", 00:07:37.260 "iscsi_get_initiator_groups", 00:07:37.260 "nvmf_set_crdt", 00:07:37.260 "nvmf_set_config", 00:07:37.260 "nvmf_set_max_subsystems", 00:07:37.260 "nvmf_stop_mdns_prr", 00:07:37.260 "nvmf_publish_mdns_prr", 00:07:37.260 "nvmf_subsystem_get_listeners", 00:07:37.260 "nvmf_subsystem_get_qpairs", 00:07:37.260 "nvmf_subsystem_get_controllers", 00:07:37.260 "nvmf_get_stats", 00:07:37.260 "nvmf_get_transports", 00:07:37.260 
"nvmf_create_transport", 00:07:37.260 "nvmf_get_targets", 00:07:37.260 "nvmf_delete_target", 00:07:37.260 "nvmf_create_target", 00:07:37.260 "nvmf_subsystem_allow_any_host", 00:07:37.260 "nvmf_subsystem_remove_host", 00:07:37.260 "nvmf_subsystem_add_host", 00:07:37.260 "nvmf_ns_remove_host", 00:07:37.260 "nvmf_ns_add_host", 00:07:37.260 "nvmf_subsystem_remove_ns", 00:07:37.260 "nvmf_subsystem_add_ns", 00:07:37.260 "nvmf_subsystem_listener_set_ana_state", 00:07:37.260 "nvmf_discovery_get_referrals", 00:07:37.260 "nvmf_discovery_remove_referral", 00:07:37.260 "nvmf_discovery_add_referral", 00:07:37.260 "nvmf_subsystem_remove_listener", 00:07:37.260 "nvmf_subsystem_add_listener", 00:07:37.260 "nvmf_delete_subsystem", 00:07:37.260 "nvmf_create_subsystem", 00:07:37.260 "nvmf_get_subsystems", 00:07:37.260 "env_dpdk_get_mem_stats", 00:07:37.260 "nbd_get_disks", 00:07:37.260 "nbd_stop_disk", 00:07:37.260 "nbd_start_disk", 00:07:37.260 "ublk_recover_disk", 00:07:37.260 "ublk_get_disks", 00:07:37.260 "ublk_stop_disk", 00:07:37.260 "ublk_start_disk", 00:07:37.260 "ublk_destroy_target", 00:07:37.260 "ublk_create_target", 00:07:37.260 "virtio_blk_create_transport", 00:07:37.260 "virtio_blk_get_transports", 00:07:37.260 "vhost_controller_set_coalescing", 00:07:37.260 "vhost_get_controllers", 00:07:37.260 "vhost_delete_controller", 00:07:37.260 "vhost_create_blk_controller", 00:07:37.260 "vhost_scsi_controller_remove_target", 00:07:37.260 "vhost_scsi_controller_add_target", 00:07:37.260 "vhost_start_scsi_controller", 00:07:37.260 "vhost_create_scsi_controller", 00:07:37.260 "thread_set_cpumask", 00:07:37.260 "framework_get_governor", 00:07:37.260 "framework_get_scheduler", 00:07:37.260 "framework_set_scheduler", 00:07:37.260 "framework_get_reactors", 00:07:37.260 "thread_get_io_channels", 00:07:37.260 "thread_get_pollers", 00:07:37.260 "thread_get_stats", 00:07:37.260 "framework_monitor_context_switch", 00:07:37.260 "spdk_kill_instance", 00:07:37.260 "log_enable_timestamps", 00:07:37.260 "log_get_flags", 00:07:37.260 "log_clear_flag", 00:07:37.260 "log_set_flag", 00:07:37.260 "log_get_level", 00:07:37.260 "log_set_level", 00:07:37.260 "log_get_print_level", 00:07:37.260 "log_set_print_level", 00:07:37.260 "framework_enable_cpumask_locks", 00:07:37.260 "framework_disable_cpumask_locks", 00:07:37.260 "framework_wait_init", 00:07:37.260 "framework_start_init", 00:07:37.260 "scsi_get_devices", 00:07:37.260 "bdev_get_histogram", 00:07:37.260 "bdev_enable_histogram", 00:07:37.260 "bdev_set_qos_limit", 00:07:37.260 "bdev_set_qd_sampling_period", 00:07:37.260 "bdev_get_bdevs", 00:07:37.260 "bdev_reset_iostat", 00:07:37.260 "bdev_get_iostat", 00:07:37.260 "bdev_examine", 00:07:37.260 "bdev_wait_for_examine", 00:07:37.260 "bdev_set_options", 00:07:37.260 "notify_get_notifications", 00:07:37.260 "notify_get_types", 00:07:37.260 "accel_get_stats", 00:07:37.260 "accel_set_options", 00:07:37.260 "accel_set_driver", 00:07:37.260 "accel_crypto_key_destroy", 00:07:37.260 "accel_crypto_keys_get", 00:07:37.260 "accel_crypto_key_create", 00:07:37.260 "accel_assign_opc", 00:07:37.260 "accel_get_module_info", 00:07:37.260 "accel_get_opc_assignments", 00:07:37.260 "vmd_rescan", 00:07:37.260 "vmd_remove_device", 00:07:37.260 "vmd_enable", 00:07:37.260 "sock_get_default_impl", 00:07:37.260 "sock_set_default_impl", 00:07:37.260 "sock_impl_set_options", 00:07:37.261 "sock_impl_get_options", 00:07:37.261 "iobuf_get_stats", 00:07:37.261 "iobuf_set_options", 00:07:37.261 "framework_get_pci_devices", 00:07:37.261 
"framework_get_config", 00:07:37.261 "framework_get_subsystems", 00:07:37.261 "trace_get_info", 00:07:37.261 "trace_get_tpoint_group_mask", 00:07:37.261 "trace_disable_tpoint_group", 00:07:37.261 "trace_enable_tpoint_group", 00:07:37.261 "trace_clear_tpoint_mask", 00:07:37.261 "trace_set_tpoint_mask", 00:07:37.261 "keyring_get_keys", 00:07:37.261 "spdk_get_version", 00:07:37.261 "rpc_get_methods" 00:07:37.261 ] 00:07:37.519 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:37.519 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:37.519 23:07:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59934 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59934 ']' 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59934 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59934 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.519 killing process with pid 59934 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59934' 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59934 00:07:37.519 23:07:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59934 00:07:37.777 ************************************ 00:07:37.777 END TEST spdkcli_tcp 00:07:37.777 ************************************ 00:07:37.777 00:07:37.777 real 0m1.970s 00:07:37.777 user 0m3.753s 00:07:37.777 sys 0m0.506s 00:07:37.777 23:08:00 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.777 23:08:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:38.036 23:08:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.036 23:08:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:38.036 23:08:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.036 23:08:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.036 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:38.036 ************************************ 00:07:38.036 START TEST dpdk_mem_utility 00:07:38.036 ************************************ 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:38.036 * Looking for test storage... 
00:07:38.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:38.036 23:08:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:38.036 23:08:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60025 00:07:38.036 23:08:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60025 00:07:38.036 23:08:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60025 ']' 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.036 23:08:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:38.036 [2024-07-24 23:08:00.452435] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:38.036 [2024-07-24 23:08:00.452838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:07:38.295 [2024-07-24 23:08:00.591307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.295 [2024-07-24 23:08:00.711040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.295 [2024-07-24 23:08:00.765113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:39.231 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.231 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:07:39.231 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:39.231 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:39.231 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.231 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:39.231 { 00:07:39.231 "filename": "/tmp/spdk_mem_dump.txt" 00:07:39.231 } 00:07:39.231 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.231 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:39.231 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:39.231 1 heaps totaling size 814.000000 MiB 00:07:39.231 size: 814.000000 MiB heap id: 0 00:07:39.231 end heaps---------- 00:07:39.231 8 mempools totaling size 598.116089 MiB 00:07:39.231 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:39.231 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:39.231 size: 84.521057 MiB name: bdev_io_60025 00:07:39.231 size: 51.011292 MiB name: evtpool_60025 00:07:39.231 size: 50.003479 
MiB name: msgpool_60025 00:07:39.231 size: 21.763794 MiB name: PDU_Pool 00:07:39.231 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:39.231 size: 0.026123 MiB name: Session_Pool 00:07:39.231 end mempools------- 00:07:39.231 6 memzones totaling size 4.142822 MiB 00:07:39.231 size: 1.000366 MiB name: RG_ring_0_60025 00:07:39.231 size: 1.000366 MiB name: RG_ring_1_60025 00:07:39.231 size: 1.000366 MiB name: RG_ring_4_60025 00:07:39.231 size: 1.000366 MiB name: RG_ring_5_60025 00:07:39.231 size: 0.125366 MiB name: RG_ring_2_60025 00:07:39.231 size: 0.015991 MiB name: RG_ring_3_60025 00:07:39.231 end memzones------- 00:07:39.231 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:39.231 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:07:39.231 list of free elements. size: 12.471558 MiB 00:07:39.231 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:39.231 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:39.231 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:39.231 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:39.231 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:39.231 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:39.231 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:39.231 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:39.231 element at address: 0x200000200000 with size: 0.833191 MiB 00:07:39.231 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:07:39.231 element at address: 0x20000b200000 with size: 0.488892 MiB 00:07:39.231 element at address: 0x200000800000 with size: 0.486328 MiB 00:07:39.231 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:39.231 element at address: 0x200027e00000 with size: 0.395935 MiB 00:07:39.231 element at address: 0x200003a00000 with size: 0.347839 MiB 00:07:39.231 list of standard malloc elements. 
size: 199.265869 MiB 00:07:39.231 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:39.231 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:39.231 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:39.231 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:39.231 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:39.231 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:39.231 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:39.231 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:39.231 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:39.231 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:07:39.231 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087c800 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59180 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59240 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59300 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59480 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59540 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59600 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59780 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59840 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59900 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:07:39.231 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:07:39.232 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:07:39.232 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:39.232 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e65680 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:39.232 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:39.233 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:39.233 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:39.233 list of memzone associated elements. size: 602.262573 MiB 00:07:39.233 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:39.233 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:39.233 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:39.233 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:39.233 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:39.233 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60025_0 00:07:39.233 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:39.233 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60025_0 00:07:39.233 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:39.233 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60025_0 00:07:39.233 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:39.233 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:39.233 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:39.233 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:39.233 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:39.233 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60025 00:07:39.233 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:39.233 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60025 00:07:39.233 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:39.233 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60025 00:07:39.233 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:39.233 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:39.233 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:39.233 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:39.233 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:39.233 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:39.233 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:39.233 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:39.233 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:39.233 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60025 00:07:39.233 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:39.233 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60025 00:07:39.233 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:39.233 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60025 00:07:39.233 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:39.233 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60025 00:07:39.233 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:39.233 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60025 00:07:39.233 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:07:39.233 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:39.233 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:39.233 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:39.233 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:39.233 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:39.233 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:39.233 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60025 00:07:39.233 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:39.233 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:39.233 element at address: 0x200027e65740 with size: 0.023743 MiB 00:07:39.233 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:39.233 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:39.233 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60025 00:07:39.233 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:07:39.233 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:39.233 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:39.233 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60025 00:07:39.233 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:39.233 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60025 00:07:39.233 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:07:39.233 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:39.233 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:39.233 23:08:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60025 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60025 ']' 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60025 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60025 00:07:39.233 killing process with pid 60025 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60025' 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60025 00:07:39.233 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60025 00:07:39.799 00:07:39.799 real 0m1.682s 00:07:39.799 user 0m1.824s 00:07:39.799 sys 0m0.429s 00:07:39.799 ************************************ 00:07:39.799 END TEST dpdk_mem_utility 00:07:39.799 ************************************ 00:07:39.800 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.800 23:08:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 23:08:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.800 23:08:02 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:39.800 23:08:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.800 23:08:02 
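The dpdk_mem_utility stage that produced the dump above works in two steps: the env_dpdk_get_mem_stats RPC makes the running target write its DPDK memory snapshot (the reply names the file, /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then renders that snapshot, first as the heap/mempool/memzone summary and then, with -m 0 as in the trace, as the detailed element and memzone listing shown above. Against an already-running target the sequence is roughly:

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

# Ask the target to dump its DPDK memory state; the reply reports
# { "filename": "/tmp/spdk_mem_dump.txt" } as seen in the trace.
"$RPC_PY" env_dpdk_get_mem_stats

# Summarize the dump, then repeat with -m 0 for the detailed per-element view.
"$MEM_SCRIPT"
"$MEM_SCRIPT" -m 0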
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.800 23:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 ************************************ 00:07:39.800 START TEST event 00:07:39.800 ************************************ 00:07:39.800 23:08:02 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:39.800 * Looking for test storage... 00:07:39.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:39.800 23:08:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:39.800 23:08:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:39.800 23:08:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:39.800 23:08:02 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:39.800 23:08:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.800 23:08:02 event -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 ************************************ 00:07:39.800 START TEST event_perf 00:07:39.800 ************************************ 00:07:39.800 23:08:02 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:39.800 Running I/O for 1 seconds...[2024-07-24 23:08:02.156226] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:39.800 [2024-07-24 23:08:02.156323] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60102 ] 00:07:40.058 [2024-07-24 23:08:02.296365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.058 [2024-07-24 23:08:02.414678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.058 [2024-07-24 23:08:02.414823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.058 [2024-07-24 23:08:02.414926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.058 [2024-07-24 23:08:02.414929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.014 Running I/O for 1 seconds... 00:07:41.014 lcore 0: 118522 00:07:41.014 lcore 1: 118520 00:07:41.014 lcore 2: 118518 00:07:41.014 lcore 3: 118519 00:07:41.014 done. 
00:07:41.272 00:07:41.272 real 0m1.368s 00:07:41.272 user 0m4.168s 00:07:41.272 sys 0m0.073s 00:07:41.272 23:08:03 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.272 23:08:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:41.272 ************************************ 00:07:41.272 END TEST event_perf 00:07:41.272 ************************************ 00:07:41.272 23:08:03 event -- common/autotest_common.sh@1142 -- # return 0 00:07:41.273 23:08:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:41.273 23:08:03 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:41.273 23:08:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.273 23:08:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:41.273 ************************************ 00:07:41.273 START TEST event_reactor 00:07:41.273 ************************************ 00:07:41.273 23:08:03 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:41.273 [2024-07-24 23:08:03.575338] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:07:41.273 [2024-07-24 23:08:03.575430] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 00:07:41.273 [2024-07-24 23:08:03.712232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.530 [2024-07-24 23:08:03.815021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.464 test_start 00:07:42.464 oneshot 00:07:42.464 tick 100 00:07:42.464 tick 100 00:07:42.464 tick 250 00:07:42.464 tick 100 00:07:42.464 tick 100 00:07:42.464 tick 100 00:07:42.464 tick 250 00:07:42.464 tick 500 00:07:42.464 tick 100 00:07:42.464 tick 100 00:07:42.464 tick 250 00:07:42.464 tick 100 00:07:42.464 tick 100 00:07:42.464 test_end 00:07:42.464 ************************************ 00:07:42.464 END TEST event_reactor 00:07:42.464 ************************************ 00:07:42.464 00:07:42.464 real 0m1.346s 00:07:42.464 user 0m1.184s 00:07:42.464 sys 0m0.056s 00:07:42.464 23:08:04 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.464 23:08:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:42.464 23:08:04 event -- common/autotest_common.sh@1142 -- # return 0 00:07:42.464 23:08:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:42.464 23:08:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.464 23:08:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.464 23:08:04 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.722 ************************************ 00:07:42.722 START TEST event_reactor_perf 00:07:42.722 ************************************ 00:07:42.722 23:08:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:42.722 [2024-07-24 23:08:04.970898] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:42.722 [2024-07-24 23:08:04.971010] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:07:42.722 [2024-07-24 23:08:05.106511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.980 [2024-07-24 23:08:05.215886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.914 test_start 00:07:43.914 test_end 00:07:43.914 Performance: 376736 events per second 00:07:43.914 00:07:43.914 real 0m1.346s 00:07:43.914 user 0m1.188s 00:07:43.914 sys 0m0.052s 00:07:43.914 ************************************ 00:07:43.914 END TEST event_reactor_perf 00:07:43.914 ************************************ 00:07:43.914 23:08:06 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.914 23:08:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.914 23:08:06 event -- common/autotest_common.sh@1142 -- # return 0 00:07:43.914 23:08:06 event -- event/event.sh@49 -- # uname -s 00:07:43.914 23:08:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:43.914 23:08:06 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:43.914 23:08:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:43.914 23:08:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.914 23:08:06 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.914 ************************************ 00:07:43.914 START TEST event_scheduler 00:07:43.914 ************************************ 00:07:43.914 23:08:06 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:44.172 * Looking for test storage... 00:07:44.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:44.172 23:08:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:44.172 23:08:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60232 00:07:44.172 23:08:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:44.172 23:08:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:44.172 23:08:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60232 00:07:44.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60232 ']' 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.172 23:08:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:44.172 [2024-07-24 23:08:06.499643] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:44.172 [2024-07-24 23:08:06.500717] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60232 ] 00:07:44.172 [2024-07-24 23:08:06.642468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.431 [2024-07-24 23:08:06.780220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.431 [2024-07-24 23:08:06.780377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.431 [2024-07-24 23:08:06.780440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.431 [2024-07-24 23:08:06.780435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:45.366 23:08:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:45.366 POWER: Cannot set governor of lcore 0 to userspace 00:07:45.366 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:45.366 POWER: Cannot set governor of lcore 0 to performance 00:07:45.366 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:45.366 POWER: Cannot set governor of lcore 0 to userspace 00:07:45.366 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:45.366 POWER: Cannot set governor of lcore 0 to userspace 00:07:45.366 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:45.366 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:45.366 POWER: Unable to set Power Management Environment for lcore 0 00:07:45.366 [2024-07-24 23:08:07.498967] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:45.366 [2024-07-24 23:08:07.499012] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:45.366 [2024-07-24 23:08:07.499103] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:45.366 [2024-07-24 23:08:07.499221] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:45.366 [2024-07-24 23:08:07.499314] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:45.366 [2024-07-24 23:08:07.499356] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 [2024-07-24 23:08:07.561239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.366 [2024-07-24 23:08:07.598890] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 ************************************ 00:07:45.366 START TEST scheduler_create_thread 00:07:45.366 ************************************ 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 2 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 3 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 4 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 5 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 6 00:07:45.366 
23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 7 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 8 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 9 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 10 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.366 23:08:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.366 23:08:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:46.740 23:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.740 23:08:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:46.740 23:08:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:46.740 23:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.740 23:08:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.112 ************************************ 00:07:48.112 END TEST scheduler_create_thread 00:07:48.112 ************************************ 00:07:48.112 23:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:48.112 00:07:48.112 real 0m2.612s 00:07:48.112 user 0m0.019s 00:07:48.112 sys 0m0.004s 00:07:48.112 23:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.112 23:08:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:48.112 23:08:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:48.112 23:08:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60232 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60232 ']' 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60232 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60232 00:07:48.112 killing process with pid 60232 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60232' 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60232 00:07:48.112 23:08:10 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60232 00:07:48.411 [2024-07-24 23:08:10.702817] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
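The scheduler trace above is a complete round trip: scheduler.sh launches test/event/scheduler/scheduler with "-m 0xF -p 0x2 --wait-for-rpc -f", switches it to the dynamic scheduler (falling back when the DPDK governor cannot be initialized), runs framework_start_init, creates pinned and unpinned threads over the RPC socket, and finally kills pid 60232. A minimal sketch of that RPC sequence, assuming the app is already listening on /var/tmp/spdk.sock and reusing the harness's rpc_cmd wrapper around scripts/rpc.py exactly as the trace does (running it standalone outside autotest_common.sh is an assumption):

  # -m is the cpumask the thread is pinned to, -a its requested busy percentage
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # unpinned threads are addressed by the thread_id the RPC returns (11 and 12 in the trace)
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"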
00:07:48.669 ************************************ 00:07:48.669 END TEST event_scheduler 00:07:48.669 ************************************ 00:07:48.669 00:07:48.669 real 0m4.584s 00:07:48.669 user 0m8.592s 00:07:48.669 sys 0m0.372s 00:07:48.669 23:08:10 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.669 23:08:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.669 23:08:10 event -- common/autotest_common.sh@1142 -- # return 0 00:07:48.669 23:08:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:48.669 23:08:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:48.669 23:08:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.669 23:08:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.669 23:08:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.669 ************************************ 00:07:48.669 START TEST app_repeat 00:07:48.669 ************************************ 00:07:48.669 23:08:10 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:48.669 23:08:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:48.669 Process app_repeat pid: 60332 00:07:48.669 spdk_app_start Round 0 00:07:48.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60332 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60332' 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:48.669 23:08:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60332 /var/tmp/spdk-nbd.sock 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60332 ']' 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:48.669 23:08:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.669 [2024-07-24 23:08:11.027853] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:07:48.669 [2024-07-24 23:08:11.027941] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:07:48.927 [2024-07-24 23:08:11.158187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.927 [2024-07-24 23:08:11.247452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.927 [2024-07-24 23:08:11.247462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.927 [2024-07-24 23:08:11.304369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.927 23:08:11 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:48.927 23:08:11 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:48.927 23:08:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:49.185 Malloc0 00:07:49.185 23:08:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:49.750 Malloc1 00:07:49.750 23:08:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:49.750 23:08:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:49.750 /dev/nbd0 00:07:49.750 23:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:50.008 23:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.008 23:08:12 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:50.008 1+0 records in 00:07:50.008 1+0 records out 00:07:50.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221677 s, 18.5 MB/s 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.008 23:08:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:50.008 23:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.008 23:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.008 23:08:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:50.266 /dev/nbd1 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:50.266 1+0 records in 00:07:50.266 1+0 records out 00:07:50.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032365 s, 12.7 MB/s 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:50.266 23:08:12 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.266 23:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.525 { 00:07:50.525 "nbd_device": "/dev/nbd0", 00:07:50.525 "bdev_name": "Malloc0" 00:07:50.525 }, 00:07:50.525 { 00:07:50.525 "nbd_device": "/dev/nbd1", 00:07:50.525 "bdev_name": "Malloc1" 00:07:50.525 } 00:07:50.525 ]' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.525 { 00:07:50.525 "nbd_device": "/dev/nbd0", 00:07:50.525 "bdev_name": "Malloc0" 00:07:50.525 }, 00:07:50.525 { 00:07:50.525 "nbd_device": "/dev/nbd1", 00:07:50.525 "bdev_name": "Malloc1" 00:07:50.525 } 00:07:50.525 ]' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:50.525 /dev/nbd1' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:50.525 /dev/nbd1' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:50.525 256+0 records in 00:07:50.525 256+0 records out 00:07:50.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769907 s, 136 MB/s 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:50.525 256+0 records in 00:07:50.525 256+0 records out 00:07:50.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267202 s, 39.2 MB/s 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:50.525 256+0 records in 00:07:50.525 256+0 records out 00:07:50.525 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256949 s, 40.8 MB/s 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.525 23:08:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.784 23:08:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.043 23:08:13 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.043 23:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.301 23:08:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.301 23:08:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:51.890 23:08:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:51.890 [2024-07-24 23:08:14.304204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.169 [2024-07-24 23:08:14.412514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.169 [2024-07-24 23:08:14.412524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.169 [2024-07-24 23:08:14.467657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.169 [2024-07-24 23:08:14.467748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:52.169 [2024-07-24 23:08:14.467762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:54.697 23:08:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:54.697 spdk_app_start Round 1 00:07:54.697 23:08:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:54.697 23:08:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60332 /var/tmp/spdk-nbd.sock 00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60332 ']' 00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.697 23:08:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.956 23:08:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.956 23:08:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:54.956 23:08:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.214 Malloc0 00:07:55.214 23:08:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.473 Malloc1 00:07:55.473 23:08:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.473 23:08:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:56.040 /dev/nbd0 00:07:56.040 23:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.040 23:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.040 1+0 records in 00:07:56.040 1+0 records out 
00:07:56.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026221 s, 15.6 MB/s 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.040 23:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:56.040 23:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.040 23:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.040 23:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:56.299 /dev/nbd1 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.299 1+0 records in 00:07:56.299 1+0 records out 00:07:56.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367234 s, 11.2 MB/s 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:56.299 23:08:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.299 23:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.558 { 00:07:56.558 "nbd_device": "/dev/nbd0", 00:07:56.558 "bdev_name": "Malloc0" 00:07:56.558 }, 00:07:56.558 { 00:07:56.558 "nbd_device": "/dev/nbd1", 00:07:56.558 "bdev_name": "Malloc1" 00:07:56.558 } 
00:07:56.558 ]' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.558 { 00:07:56.558 "nbd_device": "/dev/nbd0", 00:07:56.558 "bdev_name": "Malloc0" 00:07:56.558 }, 00:07:56.558 { 00:07:56.558 "nbd_device": "/dev/nbd1", 00:07:56.558 "bdev_name": "Malloc1" 00:07:56.558 } 00:07:56.558 ]' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:56.558 /dev/nbd1' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:56.558 /dev/nbd1' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:56.558 256+0 records in 00:07:56.558 256+0 records out 00:07:56.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00835002 s, 126 MB/s 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.558 23:08:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:56.558 256+0 records in 00:07:56.558 256+0 records out 00:07:56.558 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303227 s, 34.6 MB/s 00:07:56.558 23:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.558 23:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:56.816 256+0 records in 00:07:56.816 256+0 records out 00:07:56.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265862 s, 39.4 MB/s 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:56.816 23:08:19 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:56.816 23:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.817 23:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.074 23:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.332 23:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:57.589 23:08:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:57.589 23:08:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:58.155 23:08:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:58.155 [2024-07-24 23:08:20.624092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:58.414 [2024-07-24 23:08:20.747205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.414 [2024-07-24 23:08:20.747215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.414 [2024-07-24 23:08:20.804858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.414 [2024-07-24 23:08:20.804978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:58.414 [2024-07-24 23:08:20.804993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:00.946 spdk_app_start Round 2 00:08:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.946 23:08:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.946 23:08:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:00.946 23:08:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60332 /var/tmp/spdk-nbd.sock 00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60332 ']' 00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.946 23:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:01.204 23:08:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.204 23:08:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:01.204 23:08:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.462 Malloc0 00:08:01.720 23:08:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.720 Malloc1 00:08:01.979 23:08:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.979 23:08:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:02.238 /dev/nbd0 00:08:02.238 23:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:02.238 23:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.238 1+0 records in 00:08:02.238 1+0 records out 
00:08:02.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749637 s, 5.5 MB/s 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.238 23:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:02.238 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.238 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.238 23:08:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:02.496 /dev/nbd1 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.497 1+0 records in 00:08:02.497 1+0 records out 00:08:02.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484593 s, 8.5 MB/s 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:02.497 23:08:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.497 23:08:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.755 23:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.755 { 00:08:02.755 "nbd_device": "/dev/nbd0", 00:08:02.755 "bdev_name": "Malloc0" 00:08:02.755 }, 00:08:02.755 { 00:08:02.755 "nbd_device": "/dev/nbd1", 00:08:02.755 "bdev_name": "Malloc1" 00:08:02.755 } 
00:08:02.755 ]' 00:08:02.755 23:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.755 23:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.755 { 00:08:02.755 "nbd_device": "/dev/nbd0", 00:08:02.755 "bdev_name": "Malloc0" 00:08:02.755 }, 00:08:02.755 { 00:08:02.755 "nbd_device": "/dev/nbd1", 00:08:02.755 "bdev_name": "Malloc1" 00:08:02.755 } 00:08:02.755 ]' 00:08:02.755 23:08:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.755 /dev/nbd1' 00:08:02.755 23:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.755 /dev/nbd1' 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:02.756 256+0 records in 00:08:02.756 256+0 records out 00:08:02.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671038 s, 156 MB/s 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:02.756 256+0 records in 00:08:02.756 256+0 records out 00:08:02.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248916 s, 42.1 MB/s 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:02.756 256+0 records in 00:08:02.756 256+0 records out 00:08:02.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308804 s, 34.0 MB/s 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:02.756 23:08:25 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.756 23:08:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.323 23:08:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:03.889 23:08:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:03.889 23:08:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:04.147 23:08:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:04.412 [2024-07-24 23:08:26.808517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.671 [2024-07-24 23:08:26.959799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.671 [2024-07-24 23:08:26.959816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.671 [2024-07-24 23:08:27.040063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:04.671 [2024-07-24 23:08:27.040175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:04.671 [2024-07-24 23:08:27.040192] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:07.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:07.200 23:08:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60332 /var/tmp/spdk-nbd.sock 00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60332 ']' 00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.200 23:08:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:07.459 23:08:29 event.app_repeat -- event/event.sh@39 -- # killprocess 60332 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60332 ']' 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60332 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60332 00:08:07.459 killing process with pid 60332 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60332' 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60332 00:08:07.459 23:08:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60332 00:08:07.745 spdk_app_start is called in Round 0. 00:08:07.745 Shutdown signal received, stop current app iteration 00:08:07.745 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:08:07.745 spdk_app_start is called in Round 1. 00:08:07.745 Shutdown signal received, stop current app iteration 00:08:07.745 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:08:07.745 spdk_app_start is called in Round 2. 00:08:07.745 Shutdown signal received, stop current app iteration 00:08:07.745 Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 reinitialization... 00:08:07.745 spdk_app_start is called in Round 3. 
00:08:07.745 Shutdown signal received, stop current app iteration 00:08:07.745 23:08:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:07.745 23:08:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:07.745 00:08:07.745 real 0m19.045s 00:08:07.745 user 0m42.634s 00:08:07.745 sys 0m3.053s 00:08:07.745 23:08:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.745 23:08:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.745 ************************************ 00:08:07.745 END TEST app_repeat 00:08:07.745 ************************************ 00:08:07.745 23:08:30 event -- common/autotest_common.sh@1142 -- # return 0 00:08:07.745 23:08:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:07.745 23:08:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:07.745 23:08:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.745 23:08:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.745 23:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.745 ************************************ 00:08:07.745 START TEST cpu_locks 00:08:07.745 ************************************ 00:08:07.745 23:08:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:07.745 * Looking for test storage... 00:08:07.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:07.745 23:08:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:07.745 23:08:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:07.745 23:08:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:07.745 23:08:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:07.746 23:08:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.746 23:08:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.746 23:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.746 ************************************ 00:08:07.746 START TEST default_locks 00:08:07.746 ************************************ 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60763 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60763 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60763 ']' 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.746 23:08:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.027 [2024-07-24 23:08:30.271436] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:08.027 [2024-07-24 23:08:30.271560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60763 ] 00:08:08.027 [2024-07-24 23:08:30.413724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.286 [2024-07-24 23:08:30.547992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.286 [2024-07-24 23:08:30.608807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:08.853 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.853 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:08:08.853 23:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60763 00:08:08.853 23:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60763 00:08:08.853 23:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60763 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60763 ']' 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60763 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60763 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.419 killing process with pid 60763 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60763' 00:08:09.419 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60763 00:08:09.420 23:08:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60763 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60763 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60763 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:09.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60763 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60763 ']' 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.678 ERROR: process (pid: 60763) is no longer running 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.678 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60763) - No such process 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:09.678 ************************************ 00:08:09.678 END TEST default_locks 00:08:09.678 ************************************ 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:09.678 00:08:09.678 real 0m1.869s 00:08:09.678 user 0m2.046s 00:08:09.678 sys 0m0.534s 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.678 23:08:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.678 23:08:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:09.678 23:08:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:09.678 23:08:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:09.678 23:08:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.678 23:08:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.678 ************************************ 00:08:09.678 START TEST default_locks_via_rpc 00:08:09.678 ************************************ 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:08:09.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60809 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60809 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60809 ']' 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.678 23:08:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.937 [2024-07-24 23:08:32.196786] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:09.937 [2024-07-24 23:08:32.196881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60809 ] 00:08:09.937 [2024-07-24 23:08:32.333756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.196 [2024-07-24 23:08:32.447913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.196 [2024-07-24 23:08:32.502632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 60809 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60809 00:08:11.135 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60809 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60809 ']' 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60809 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60809 00:08:11.395 killing process with pid 60809 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60809' 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60809 00:08:11.395 23:08:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60809 00:08:12.008 00:08:12.008 real 0m2.033s 00:08:12.008 user 0m2.248s 00:08:12.008 sys 0m0.605s 00:08:12.008 23:08:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.008 ************************************ 00:08:12.008 END TEST default_locks_via_rpc 00:08:12.008 ************************************ 00:08:12.008 23:08:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 23:08:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:12.008 23:08:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:12.008 23:08:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:12.008 23:08:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.008 23:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 ************************************ 00:08:12.008 START TEST non_locking_app_on_locked_coremask 00:08:12.008 ************************************ 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60860 00:08:12.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60860 /var/tmp/spdk.sock 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60860 ']' 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.008 23:08:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 [2024-07-24 23:08:34.287822] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:12.008 [2024-07-24 23:08:34.287923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:08:12.008 [2024-07-24 23:08:34.429735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.269 [2024-07-24 23:08:34.553694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.269 [2024-07-24 23:08:34.612561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:12.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60881 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60881 /var/tmp/spdk2.sock 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60881 ']' 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.835 23:08:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.094 [2024-07-24 23:08:35.366090] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:13.094 [2024-07-24 23:08:35.368181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:08:13.094 [2024-07-24 23:08:35.511079] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:13.094 [2024-07-24 23:08:35.511498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.353 [2024-07-24 23:08:35.834605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.611 [2024-07-24 23:08:35.989992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.177 23:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:14.177 23:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:14.177 23:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60860 00:08:14.177 23:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60860 00:08:14.177 23:08:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60860 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60860 ']' 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60860 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60860 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:15.112 killing process with pid 60860 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60860' 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60860 00:08:15.112 23:08:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60860 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60881 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60881 ']' 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@952 -- # kill -0 60881 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60881 00:08:16.065 killing process with pid 60881 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:16.065 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:16.066 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60881' 00:08:16.066 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60881 00:08:16.066 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60881 00:08:16.324 ************************************ 00:08:16.324 END TEST non_locking_app_on_locked_coremask 00:08:16.324 ************************************ 00:08:16.324 00:08:16.324 real 0m4.395s 00:08:16.324 user 0m4.724s 00:08:16.324 sys 0m1.227s 00:08:16.324 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.324 23:08:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.324 23:08:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:16.324 23:08:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:16.324 23:08:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:16.324 23:08:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.324 23:08:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.324 ************************************ 00:08:16.324 START TEST locking_app_on_unlocked_coremask 00:08:16.324 ************************************ 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60954 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60954 /var/tmp/spdk.sock 00:08:16.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60954 ']' 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.324 23:08:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.324 [2024-07-24 23:08:38.726727] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:16.324 [2024-07-24 23:08:38.726805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60954 ] 00:08:16.583 [2024-07-24 23:08:38.862461] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:16.583 [2024-07-24 23:08:38.863148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.583 [2024-07-24 23:08:39.023062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.841 [2024-07-24 23:08:39.083932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60970 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60970 /var/tmp/spdk2.sock 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60970 ']' 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:17.408 23:08:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.408 [2024-07-24 23:08:39.880355] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:17.408 [2024-07-24 23:08:39.880465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:08:17.666 [2024-07-24 23:08:40.029612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.924 [2024-07-24 23:08:40.279597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.924 [2024-07-24 23:08:40.391628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.490 23:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.490 23:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:18.490 23:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60970 00:08:18.490 23:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60970 00:08:18.490 23:08:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60954 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60954 ']' 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60954 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60954 00:08:19.425 killing process with pid 60954 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60954' 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60954 00:08:19.425 23:08:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60954 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60970 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60970 ']' 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60970 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60970 00:08:20.361 killing process with pid 60970 00:08:20.361 23:08:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60970' 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60970 00:08:20.361 23:08:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60970 00:08:20.618 00:08:20.618 real 0m4.426s 00:08:20.619 user 0m5.053s 00:08:20.619 sys 0m1.185s 00:08:20.619 23:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.619 23:08:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.619 ************************************ 00:08:20.619 END TEST locking_app_on_unlocked_coremask 00:08:20.619 ************************************ 00:08:20.876 23:08:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:20.876 23:08:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:20.876 23:08:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.877 23:08:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.877 23:08:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.877 ************************************ 00:08:20.877 START TEST locking_app_on_locked_coremask 00:08:20.877 ************************************ 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:08:20.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61037 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61037 /var/tmp/spdk.sock 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61037 ']' 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.877 23:08:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.877 [2024-07-24 23:08:43.204404] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:20.877 [2024-07-24 23:08:43.205305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:08:20.877 [2024-07-24 23:08:43.343539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.133 [2024-07-24 23:08:43.462501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.134 [2024-07-24 23:08:43.519319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61053 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61053 /var/tmp/spdk2.sock 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61053 /var/tmp/spdk2.sock 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61053 /var/tmp/spdk2.sock 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61053 ']' 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.068 23:08:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.068 [2024-07-24 23:08:44.276638] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:22.068 [2024-07-24 23:08:44.276909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61053 ] 00:08:22.068 [2024-07-24 23:08:44.418787] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61037 has claimed it. 00:08:22.068 [2024-07-24 23:08:44.418861] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:22.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61053) - No such process 00:08:22.635 ERROR: process (pid: 61053) is no longer running 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61037 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61037 00:08:22.635 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61037 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61037 ']' 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61037 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61037 00:08:23.202 killing process with pid 61037 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61037' 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61037 00:08:23.202 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61037 00:08:23.461 ************************************ 00:08:23.461 END TEST locking_app_on_locked_coremask 00:08:23.461 ************************************ 00:08:23.461 00:08:23.461 real 0m2.703s 00:08:23.461 user 0m3.135s 00:08:23.461 sys 0m0.667s 00:08:23.461 23:08:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.461 23:08:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.461 23:08:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:23.461 23:08:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:23.461 23:08:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.461 23:08:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.461 23:08:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.461 ************************************ 00:08:23.461 START TEST locking_overlapped_coremask 00:08:23.461 ************************************ 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61104 00:08:23.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61104 /var/tmp/spdk.sock 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61104 ']' 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.461 23:08:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:23.777 [2024-07-24 23:08:45.961361] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:23.777 [2024-07-24 23:08:45.961467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61104 ] 00:08:23.777 [2024-07-24 23:08:46.099068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:23.777 [2024-07-24 23:08:46.222902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.777 [2024-07-24 23:08:46.223055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.777 [2024-07-24 23:08:46.223072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.036 [2024-07-24 23:08:46.281610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61122 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61122 /var/tmp/spdk2.sock 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61122 /var/tmp/spdk2.sock 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61122 /var/tmp/spdk2.sock 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61122 ']' 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:24.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.603 23:08:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.603 [2024-07-24 23:08:46.988867] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:24.603 [2024-07-24 23:08:46.988965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ] 00:08:24.862 [2024-07-24 23:08:47.134994] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61104 has claimed it. 00:08:24.862 [2024-07-24 23:08:47.135098] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:25.430 ERROR: process (pid: 61122) is no longer running 00:08:25.430 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61122) - No such process 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61104 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61104 ']' 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61104 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61104 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61104' 00:08:25.430 killing process with pid 61104 00:08:25.430 23:08:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61104 00:08:25.430 23:08:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61104 00:08:25.689 00:08:25.689 real 0m2.213s 00:08:25.689 user 0m6.026s 00:08:25.690 sys 0m0.450s 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.690 ************************************ 00:08:25.690 END TEST locking_overlapped_coremask 00:08:25.690 ************************************ 00:08:25.690 23:08:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:25.690 23:08:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:25.690 23:08:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:25.690 23:08:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.690 23:08:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:25.690 ************************************ 00:08:25.690 START TEST locking_overlapped_coremask_via_rpc 00:08:25.690 ************************************ 00:08:25.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61162 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61162 /var/tmp/spdk.sock 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61162 ']' 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.690 23:08:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:25.948 [2024-07-24 23:08:48.217214] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:25.948 [2024-07-24 23:08:48.217312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61162 ] 00:08:25.948 [2024-07-24 23:08:48.356246] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:25.948 [2024-07-24 23:08:48.356309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:26.207 [2024-07-24 23:08:48.473749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.207 [2024-07-24 23:08:48.473895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.207 [2024-07-24 23:08:48.473894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.207 [2024-07-24 23:08:48.528488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61180 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61180 /var/tmp/spdk2.sock 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61180 ']' 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.773 23:08:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.107 [2024-07-24 23:08:49.300914] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:27.107 [2024-07-24 23:08:49.301493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61180 ] 00:08:27.107 [2024-07-24 23:08:49.452275] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:27.107 [2024-07-24 23:08:49.452341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:27.375 [2024-07-24 23:08:49.708547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.375 [2024-07-24 23:08:49.712233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.375 [2024-07-24 23:08:49.712236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.375 [2024-07-24 23:08:49.824443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.942 [2024-07-24 23:08:50.356260] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61162 has claimed it. 00:08:27.942 request: 00:08:27.942 { 00:08:27.942 "method": "framework_enable_cpumask_locks", 00:08:27.942 "req_id": 1 00:08:27.942 } 00:08:27.942 Got JSON-RPC error response 00:08:27.942 response: 00:08:27.942 { 00:08:27.942 "code": -32603, 00:08:27.942 "message": "Failed to claim CPU core: 2" 00:08:27.942 } 00:08:27.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
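The JSON-RPC failure above (code -32603, "Failed to claim CPU core: 2") is what framework_enable_cpumask_locks returns when a target's core mask overlaps cores another target has already locked. A minimal sketch of that flow outside the test harness, assuming the workspace layout used above; scripts/rpc.py and the sleep-based wait are assumptions, since the log drives this through the rpc_cmd and waitforlisten helpers instead:

# Hedged sketch: two targets with overlapping masks, locks claimed lazily via RPC.
SPDK=/home/vagrant/spdk_repo/spdk
$SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no locks yet
$SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, no locks yet
sleep 2
$SPDK/scripts/rpc.py framework_enable_cpumask_locks                                # first target claims cores 0-2
$SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo 'expected: core 2 is already locked (-32603)'                          # fails as in the log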
00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61162 /var/tmp/spdk.sock 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61162 ']' 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.942 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61180 /var/tmp/spdk2.sock 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61180 ']' 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:28.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
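The check_remaining_locks step above verifies that, after the overlapping claim fails, the lock files on disk still match the first target's 0x7 mask. A minimal standalone version of that check, using the same glob and brace patterns the harness expands; the pgrep lookup is an assumption, since the harness uses the pid it recorded at startup:

# Hedged sketch of check_remaining_locks plus the per-pid lock probe used earlier.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] \
    && echo 'lock files match mask 0x7' \
    || echo "unexpected lock files: ${locks[*]}"
lslocks -p "$(pgrep -f spdk_tgt | head -n 1)" | grep -q spdk_cpu_lock \
    && echo 'target still holds its core locks'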
00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.201 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:28.460 00:08:28.460 real 0m2.764s 00:08:28.460 user 0m1.481s 00:08:28.460 sys 0m0.210s 00:08:28.460 ************************************ 00:08:28.460 END TEST locking_overlapped_coremask_via_rpc 00:08:28.460 ************************************ 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.460 23:08:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:28.839 23:08:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:28.839 23:08:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:28.839 23:08:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61162 ]] 00:08:28.839 23:08:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61162 00:08:28.839 23:08:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61162 ']' 00:08:28.839 23:08:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61162 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61162 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61162' 00:08:28.840 killing process with pid 61162 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61162 00:08:28.840 23:08:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61162 00:08:29.104 23:08:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61180 ]] 00:08:29.104 23:08:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61180 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61180 ']' 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61180 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:08:29.104 23:08:51 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61180 00:08:29.104 killing process with pid 61180 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61180' 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61180 00:08:29.104 23:08:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61180 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61162 ]] 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61162 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61162 ']' 00:08:29.671 Process with pid 61162 is not found 00:08:29.671 Process with pid 61180 is not found 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61162 00:08:29.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61162) - No such process 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61162 is not found' 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61180 ]] 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61180 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61180 ']' 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61180 00:08:29.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61180) - No such process 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61180 is not found' 00:08:29.671 23:08:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.671 00:08:29.671 real 0m21.769s 00:08:29.671 user 0m37.717s 00:08:29.671 sys 0m5.753s 00:08:29.671 ************************************ 00:08:29.671 END TEST cpu_locks 00:08:29.671 ************************************ 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.671 23:08:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 23:08:51 event -- common/autotest_common.sh@1142 -- # return 0 00:08:29.671 ************************************ 00:08:29.671 END TEST event 00:08:29.671 ************************************ 00:08:29.671 00:08:29.671 real 0m49.874s 00:08:29.671 user 1m35.620s 00:08:29.671 sys 0m9.616s 00:08:29.671 23:08:51 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.671 23:08:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 23:08:51 -- common/autotest_common.sh@1142 -- # return 0 00:08:29.671 23:08:51 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:29.671 23:08:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:29.671 23:08:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.671 23:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 ************************************ 00:08:29.671 START TEST thread 
00:08:29.671 ************************************ 00:08:29.671 23:08:51 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:29.671 * Looking for test storage... 00:08:29.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:29.671 23:08:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:29.671 23:08:52 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:29.671 23:08:52 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.671 23:08:52 thread -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 ************************************ 00:08:29.671 START TEST thread_poller_perf 00:08:29.671 ************************************ 00:08:29.672 23:08:52 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:29.672 [2024-07-24 23:08:52.078121] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:29.672 [2024-07-24 23:08:52.078330] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61307 ] 00:08:29.929 [2024-07-24 23:08:52.212627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.929 [2024-07-24 23:08:52.339259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.929 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:31.305 ====================================== 00:08:31.305 busy:2208916654 (cyc) 00:08:31.305 total_run_count: 301000 00:08:31.305 tsc_hz: 2200000000 (cyc) 00:08:31.305 ====================================== 00:08:31.305 poller_cost: 7338 (cyc), 3335 (nsec) 00:08:31.305 00:08:31.305 real 0m1.417s 00:08:31.305 user 0m1.243s 00:08:31.305 sys 0m0.065s 00:08:31.305 23:08:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.305 23:08:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.305 ************************************ 00:08:31.305 END TEST thread_poller_perf 00:08:31.305 ************************************ 00:08:31.305 23:08:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:31.305 23:08:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.305 23:08:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:31.305 23:08:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.305 23:08:53 thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.305 ************************************ 00:08:31.305 START TEST thread_poller_perf 00:08:31.305 ************************************ 00:08:31.305 23:08:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.305 [2024-07-24 23:08:53.559095] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:31.305 [2024-07-24 23:08:53.559263] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61338 ] 00:08:31.305 [2024-07-24 23:08:53.692549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.562 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:31.562 [2024-07-24 23:08:53.852282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.495 ====================================== 00:08:32.495 busy:2202430639 (cyc) 00:08:32.495 total_run_count: 3930000 00:08:32.495 tsc_hz: 2200000000 (cyc) 00:08:32.495 ====================================== 00:08:32.495 poller_cost: 560 (cyc), 254 (nsec) 00:08:32.754 ************************************ 00:08:32.754 END TEST thread_poller_perf 00:08:32.754 ************************************ 00:08:32.754 00:08:32.754 real 0m1.444s 00:08:32.754 user 0m1.257s 00:08:32.754 sys 0m0.077s 00:08:32.754 23:08:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.754 23:08:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.754 23:08:55 thread -- common/autotest_common.sh@1142 -- # return 0 00:08:32.754 23:08:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:32.754 ************************************ 00:08:32.754 END TEST thread 00:08:32.754 ************************************ 00:08:32.754 00:08:32.754 real 0m3.067s 00:08:32.754 user 0m2.571s 00:08:32.754 sys 0m0.271s 00:08:32.754 23:08:55 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.754 23:08:55 thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.754 23:08:55 -- common/autotest_common.sh@1142 -- # return 0 00:08:32.754 23:08:55 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:32.754 23:08:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.754 23:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.754 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:08:32.754 ************************************ 00:08:32.754 START TEST accel 00:08:32.754 ************************************ 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:08:32.754 * Looking for test storage... 00:08:32.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:32.754 23:08:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:32.754 23:08:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:32.754 23:08:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:32.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
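The two poller_perf runs above derive poller_cost by dividing the busy cycle count by total_run_count and converting to nanoseconds with the reported tsc_hz (2.2 GHz on this host): 2208916654 / 301000 ≈ 7338 cyc ≈ 3335 ns for the 1 µs-period run, and 2202430639 / 3930000 ≈ 560 cyc ≈ 254 ns for the 0-period run. A minimal sketch of that conversion, with the figures copied from the first run:

# Hedged sketch: recompute poller_cost from the numbers printed above.
busy=2208916654; runs=301000; tsc_hz=2200000000
cyc=$(( busy / runs ))                    # ~7338 cycles per poller invocation
nsec=$(( cyc * 1000000000 / tsc_hz ))     # ~3335 ns at 2.2 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"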
00:08:32.754 23:08:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61413 00:08:32.754 23:08:55 accel -- accel/accel.sh@63 -- # waitforlisten 61413 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@829 -- # '[' -z 61413 ']' 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.754 23:08:55 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.754 23:08:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:32.754 23:08:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.754 23:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.754 23:08:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.754 23:08:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.755 23:08:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.755 23:08:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.755 23:08:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:32.755 23:08:55 accel -- accel/accel.sh@41 -- # jq -r . 00:08:32.755 [2024-07-24 23:08:55.234970] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:32.755 [2024-07-24 23:08:55.235116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61413 ] 00:08:33.014 [2024-07-24 23:08:55.376666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.273 [2024-07-24 23:08:55.533471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.273 [2024-07-24 23:08:55.614900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:33.840 23:08:56 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.840 23:08:56 accel -- common/autotest_common.sh@862 -- # return 0 00:08:33.840 23:08:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:33.840 23:08:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:33.840 23:08:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:33.840 23:08:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:33.840 23:08:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:33.840 23:08:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:33.840 23:08:56 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.840 23:08:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:33.840 23:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.840 23:08:56 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.840 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.840 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.840 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 
23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:33.841 23:08:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:33.841 23:08:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:33.841 23:08:56 accel -- accel/accel.sh@75 -- # killprocess 61413 00:08:33.841 23:08:56 accel -- common/autotest_common.sh@948 -- # '[' -z 61413 ']' 00:08:33.841 23:08:56 accel -- common/autotest_common.sh@952 -- # kill -0 61413 00:08:33.841 23:08:56 accel -- common/autotest_common.sh@953 -- # uname 00:08:33.841 23:08:56 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.841 23:08:56 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61413 00:08:34.099 23:08:56 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.099 23:08:56 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.099 23:08:56 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61413' 00:08:34.099 killing process with pid 61413 00:08:34.099 23:08:56 accel -- common/autotest_common.sh@967 -- # kill 61413 00:08:34.099 23:08:56 accel -- common/autotest_common.sh@972 -- # wait 61413 00:08:34.666 23:08:56 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:34.666 23:08:56 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.666 23:08:56 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:34.666 23:08:56 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
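The long run of expected_opcs[...]=software assignments above comes from the accel_get_opc_assignments RPC, whose JSON reply the test flattens with the jq filter shown. A minimal sketch of the same query, assuming a target is listening on the default RPC socket and that scripts/rpc.py is used directly (the log goes through the rpc_cmd wrapper instead):

# Hedged sketch: dump opcode-to-module assignments the way accel.sh does.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# With no accel modules configured, every opcode is expected to map to software.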
00:08:34.666 23:08:56 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.666 23:08:56 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:34.666 23:08:56 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.666 23:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.666 ************************************ 00:08:34.666 START TEST accel_missing_filename 00:08:34.666 ************************************ 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:34.666 23:08:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:34.666 23:08:56 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:34.666 [2024-07-24 23:08:57.013535] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:34.666 [2024-07-24 23:08:57.013636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61470 ] 00:08:34.925 [2024-07-24 23:08:57.151361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.925 [2024-07-24 23:08:57.301987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.925 [2024-07-24 23:08:57.381253] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.183 [2024-07-24 23:08:57.483364] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:35.183 A filename is required. 
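The "A filename is required." error above is accel_perf rejecting a compress workload started without -l, which is exactly what accel_missing_filename asserts; the accel_compress_verify case that follows adds -l but trips over -y instead. A minimal sketch of the failing invocation next to one that supplies the input file, using the binary and bib paths from the log (the success of the second command is an assumption, since this log only exercises the failure paths):

# Hedged sketch: compress without and with the required -l input file.
ACCEL=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
$ACCEL -t 1 -w compress                                                  # fails: a filename is required
$ACCEL -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # compresses the sample input for 1 second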
00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.183 00:08:35.183 real 0m0.589s 00:08:35.183 user 0m0.380s 00:08:35.183 sys 0m0.151s 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.183 23:08:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:35.183 ************************************ 00:08:35.183 END TEST accel_missing_filename 00:08:35.183 ************************************ 00:08:35.183 23:08:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.183 23:08:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.183 23:08:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:35.183 23:08:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.184 23:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.184 ************************************ 00:08:35.184 START TEST accel_compress_verify 00:08:35.184 ************************************ 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.184 23:08:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.184 23:08:57 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:08:35.184 23:08:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:35.184 [2024-07-24 23:08:57.647401] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:35.184 [2024-07-24 23:08:57.648120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61494 ] 00:08:35.442 [2024-07-24 23:08:57.783318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.442 [2024-07-24 23:08:57.903124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.700 [2024-07-24 23:08:57.963279] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.700 [2024-07-24 23:08:58.040040] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:35.700 00:08:35.700 Compression does not support the verify option, aborting. 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.700 00:08:35.700 real 0m0.525s 00:08:35.700 user 0m0.344s 00:08:35.700 sys 0m0.123s 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.700 23:08:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:35.700 ************************************ 00:08:35.700 END TEST accel_compress_verify 00:08:35.700 ************************************ 00:08:35.959 23:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.959 23:08:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:35.959 23:08:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:35.959 23:08:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.959 23:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.959 ************************************ 00:08:35.959 START TEST accel_wrong_workload 00:08:35.959 ************************************ 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.959 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:35.959 23:08:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:35.959 Unsupported workload type: foobar 00:08:35.959 [2024-07-24 23:08:58.230886] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:35.959 accel_perf options: 00:08:35.959 [-h help message] 00:08:35.959 [-q queue depth per core] 00:08:35.959 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:35.959 [-T number of threads per core 00:08:35.959 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:35.959 [-t time in seconds] 00:08:35.959 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:35.960 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:35.960 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:35.960 [-l for compress/decompress workloads, name of uncompressed input file 00:08:35.960 [-S for crc32c workload, use this seed value (default 0) 00:08:35.960 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:35.960 [-f for fill workload, use this BYTE value (default 255) 00:08:35.960 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:35.960 [-y verify result if this switch is on] 00:08:35.960 [-a tasks to allocate per core (default: same value as -q)] 00:08:35.960 Can be used to spread operations across a wider range of memory. 
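The usage listing above is what accel_perf prints when it rejects the unknown 'foobar' workload; the NOT wrapper only asserts that the command exits non-zero. A rough by-hand equivalent, reusing the binary path and flags that appear in this log (crc32c -S 32 -y matches the accel_crc32c case further down, and any workload named in the -w line of the usage text would do for the positive run):

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path taken from this log
# Negative case: an unknown -w value fails up front ("Unsupported workload type: foobar").
if "$ACCEL_PERF" -t 1 -w foobar; then echo "unexpected success"; else echo "rejected as expected"; fi
# Positive case: a supported workload runs for one second; -y verifies the results.
"$ACCEL_PERF" -t 1 -w crc32c -S 32 -y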
00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.960 00:08:35.960 real 0m0.031s 00:08:35.960 user 0m0.018s 00:08:35.960 sys 0m0.012s 00:08:35.960 ************************************ 00:08:35.960 END TEST accel_wrong_workload 00:08:35.960 ************************************ 00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.960 23:08:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.960 23:08:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.960 ************************************ 00:08:35.960 START TEST accel_negative_buffers 00:08:35.960 ************************************ 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:35.960 23:08:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:35.960 -x option must be non-negative. 
00:08:35.960 [2024-07-24 23:08:58.314054] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:35.960 accel_perf options: 00:08:35.960 [-h help message] 00:08:35.960 [-q queue depth per core] 00:08:35.960 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:35.960 [-T number of threads per core 00:08:35.960 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:35.960 [-t time in seconds] 00:08:35.960 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:35.960 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:35.960 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:35.960 [-l for compress/decompress workloads, name of uncompressed input file 00:08:35.960 [-S for crc32c workload, use this seed value (default 0) 00:08:35.960 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:35.960 [-f for fill workload, use this BYTE value (default 255) 00:08:35.960 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:35.960 [-y verify result if this switch is on] 00:08:35.960 [-a tasks to allocate per core (default: same value as -q)] 00:08:35.960 Can be used to spread operations across a wider range of memory. 00:08:35.960 ************************************ 00:08:35.960 END TEST accel_negative_buffers 00:08:35.960 ************************************ 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:35.960 00:08:35.960 real 0m0.032s 00:08:35.960 user 0m0.016s 00:08:35.960 sys 0m0.015s 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.960 23:08:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:35.960 23:08:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.960 23:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.960 ************************************ 00:08:35.960 START TEST accel_crc32c 00:08:35.960 ************************************ 00:08:35.960 23:08:58 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:35.960 23:08:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:35.960 [2024-07-24 23:08:58.389283] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:35.960 [2024-07-24 23:08:58.389378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61557 ] 00:08:36.219 [2024-07-24 23:08:58.524041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.219 [2024-07-24 23:08:58.641577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.219 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.478 23:08:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.412 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:37.413 ************************************ 00:08:37.413 END TEST accel_crc32c 00:08:37.413 ************************************ 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:37.413 23:08:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.413 00:08:37.413 real 0m1.503s 00:08:37.413 user 0m1.295s 00:08:37.413 sys 0m0.114s 00:08:37.413 23:08:59 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.413 23:08:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:37.671 23:08:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:37.671 23:08:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:37.671 23:08:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:37.671 23:08:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.671 23:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:37.671 ************************************ 00:08:37.671 START TEST accel_crc32c_C2 00:08:37.671 ************************************ 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:37.671 23:08:59 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:37.671 23:08:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:37.671 [2024-07-24 23:08:59.945701] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:37.671 [2024-07-24 23:08:59.945794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:08:37.671 [2024-07-24 23:09:00.081004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.930 [2024-07-24 23:09:00.198146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:37.930 23:09:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.306 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.306 23:09:01 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.306 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.306 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.306 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 ************************************ 00:08:39.307 END TEST accel_crc32c_C2 00:08:39.307 ************************************ 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.307 00:08:39.307 real 0m1.511s 00:08:39.307 user 0m1.297s 00:08:39.307 sys 0m0.118s 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.307 23:09:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:39.307 23:09:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:39.307 23:09:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:39.307 23:09:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:39.307 23:09:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.307 23:09:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.307 ************************************ 00:08:39.307 START TEST accel_copy 00:08:39.307 ************************************ 00:08:39.307 23:09:01 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.307 23:09:01 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:39.307 23:09:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:39.307 [2024-07-24 23:09:01.511009] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:39.307 [2024-07-24 23:09:01.511089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61622 ] 00:08:39.307 [2024-07-24 23:09:01.645017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.307 [2024-07-24 23:09:01.774172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 
23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:39.566 23:09:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:40.944 23:09:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.944 00:08:40.944 real 0m1.520s 00:08:40.944 user 0m1.310s 00:08:40.944 sys 0m0.116s 00:08:40.944 ************************************ 00:08:40.944 END TEST accel_copy 00:08:40.944 ************************************ 00:08:40.944 23:09:03 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.944 23:09:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:40.944 23:09:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:40.944 23:09:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:40.944 23:09:03 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:40.944 23:09:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.944 23:09:03 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.944 ************************************ 00:08:40.944 START TEST accel_fill 00:08:40.944 ************************************ 00:08:40.944 23:09:03 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.944 23:09:03 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:40.944 [2024-07-24 23:09:03.084216] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:40.944 [2024-07-24 23:09:03.084301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:08:40.944 [2024-07-24 23:09:03.221995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.944 [2024-07-24 23:09:03.337456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.944 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:40.945 23:09:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:42.366 ************************************ 00:08:42.366 END TEST accel_fill 00:08:42.366 ************************************ 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:42.366 23:09:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.366 00:08:42.366 real 0m1.504s 00:08:42.366 user 0m1.297s 00:08:42.366 sys 0m0.111s 00:08:42.366 23:09:04 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.366 23:09:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:42.366 23:09:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:42.366 23:09:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:42.366 23:09:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:42.366 23:09:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.366 23:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.366 ************************************ 00:08:42.366 START TEST accel_copy_crc32c 00:08:42.366 ************************************ 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:08:42.366 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:42.366 [2024-07-24 23:09:04.633083] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:42.366 [2024-07-24 23:09:04.633212] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61691 ] 00:08:42.366 [2024-07-24 23:09:04.767830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.624 [2024-07-24 23:09:04.892400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:42.624 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:42.625 23:09:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:43.998 ************************************ 00:08:43.998 END TEST accel_copy_crc32c 00:08:43.998 ************************************ 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.998 00:08:43.998 real 0m1.506s 00:08:43.998 user 0m1.295s 00:08:43.998 sys 0m0.113s 00:08:43.998 23:09:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.999 23:09:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:43.999 23:09:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:43.999 23:09:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:43.999 23:09:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:43.999 23:09:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.999 23:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:43.999 ************************************ 00:08:43.999 START TEST accel_copy_crc32c_C2 00:08:43.999 ************************************ 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:43.999 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:43.999 [2024-07-24 23:09:06.194224] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:43.999 [2024-07-24 23:09:06.194307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61731 ] 00:08:43.999 [2024-07-24 23:09:06.335354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.999 [2024-07-24 23:09:06.458146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:44.258 23:09:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.631 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.632 00:08:45.632 real 0m1.514s 00:08:45.632 user 0m1.298s 00:08:45.632 sys 0m0.120s 00:08:45.632 ************************************ 00:08:45.632 END TEST accel_copy_crc32c_C2 
00:08:45.632 ************************************ 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.632 23:09:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:45.632 23:09:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:45.632 23:09:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:45.632 23:09:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:45.632 23:09:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.632 23:09:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:45.632 ************************************ 00:08:45.632 START TEST accel_dualcast 00:08:45.632 ************************************ 00:08:45.632 23:09:07 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:45.632 23:09:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:45.632 [2024-07-24 23:09:07.759633] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
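For reference, the two copy_crc32c cases exercised above can be replayed by hand against the same build tree. This is a minimal sketch, not part of the harness: it assumes the default software accel module (the one the log shows being selected) needs no JSON config, so the wrapper's -c /dev/fd/62 config plumbing is dropped.

ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path as it appears in this log

# 1-second copy_crc32c run with result verification (-y), matching the accel_test call above
"$ACCEL_PERF" -t 1 -w copy_crc32c -y

# the accel_copy_crc32c_C2 case adds -C 2; the 4096-byte and 8192-byte values in its
# setup suggest the destination side is doubled accordingly
"$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2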
00:08:45.632 [2024-07-24 23:09:07.759726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61760 ] 00:08:45.632 [2024-07-24 23:09:07.896627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.632 [2024-07-24 23:09:08.007046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:45.632 23:09:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.004 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:47.005 23:09:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.005 00:08:47.005 real 0m1.492s 00:08:47.005 user 0m1.281s 00:08:47.005 sys 0m0.115s 00:08:47.005 23:09:09 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.005 23:09:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:47.005 ************************************ 00:08:47.005 END TEST accel_dualcast 00:08:47.005 ************************************ 00:08:47.005 23:09:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:47.005 23:09:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:47.005 23:09:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:47.005 23:09:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.005 23:09:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.005 ************************************ 00:08:47.005 START TEST accel_compare 00:08:47.005 ************************************ 00:08:47.005 23:09:09 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:47.005 23:09:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:47.005 [2024-07-24 23:09:09.305465] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:47.005 [2024-07-24 23:09:09.306190] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61800 ] 00:08:47.005 [2024-07-24 23:09:09.448822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.263 [2024-07-24 23:09:09.574575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:47.263 23:09:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:48.636 23:09:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.636 00:08:48.636 real 0m1.522s 00:08:48.636 user 0m1.304s 00:08:48.636 sys 0m0.118s 00:08:48.636 ************************************ 00:08:48.636 END TEST accel_compare 00:08:48.636 ************************************ 00:08:48.636 23:09:10 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.636 23:09:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 23:09:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:48.636 23:09:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:48.636 23:09:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:48.636 23:09:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.636 23:09:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 ************************************ 00:08:48.636 START TEST accel_xor 00:08:48.636 ************************************ 00:08:48.636 23:09:10 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:48.636 23:09:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:48.636 [2024-07-24 23:09:10.882197] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
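Each test in this stretch ends with a wall-clock summary (the real/user/sys lines printed just before its END TEST banner). A throwaway one-liner for pulling those out of a saved copy of this console output; the file name build.log is a placeholder, not something the pipeline produces:

# extract every per-test timing summary from a saved copy of this log
grep -oE '(real|user|sys)[[:space:]]+[0-9]+m[0-9.]+s' build.log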
00:08:48.636 [2024-07-24 23:09:10.883639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61829 ] 00:08:48.636 [2024-07-24 23:09:11.040288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.894 [2024-07-24 23:09:11.184748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:48.894 23:09:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.266 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.267 ************************************ 00:08:50.267 END TEST accel_xor 00:08:50.267 
************************************ 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:50.267 00:08:50.267 real 0m1.559s 00:08:50.267 user 0m1.349s 00:08:50.267 sys 0m0.112s 00:08:50.267 23:09:12 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.267 23:09:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:50.267 23:09:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:50.267 23:09:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:50.267 23:09:12 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:50.267 23:09:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.267 23:09:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.267 ************************************ 00:08:50.267 START TEST accel_xor 00:08:50.267 ************************************ 00:08:50.267 23:09:12 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:50.267 23:09:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:50.267 [2024-07-24 23:09:12.481548] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:50.267 [2024-07-24 23:09:12.481649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61869 ] 00:08:50.267 [2024-07-24 23:09:12.618240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.267 [2024-07-24 23:09:12.747035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.524 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:50.525 23:09:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.513 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.514 23:09:13 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:51.514 ************************************ 00:08:51.514 END TEST accel_xor 00:08:51.514 ************************************ 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:51.514 23:09:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.514 00:08:51.514 real 0m1.537s 00:08:51.514 user 0m1.332s 00:08:51.514 sys 0m0.106s 00:08:51.514 23:09:13 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.514 23:09:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 23:09:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:51.772 23:09:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:51.772 23:09:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:51.772 23:09:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.772 23:09:14 accel -- common/autotest_common.sh@10 -- # set +x 00:08:51.772 ************************************ 00:08:51.772 START TEST accel_dif_verify 00:08:51.772 ************************************ 00:08:51.772 23:09:14 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:51.772 23:09:14 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:51.772 [2024-07-24 23:09:14.066871] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
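The case "$var" / IFS=: / read -r var val lines that fill the accel_xor block above (and every accel test that follows) are accel.sh walking accel_perf's "name: value" output one line at a time so it can record which opcode and which module actually ran; the [[ -n software ]] / [[ -n xor ]] / [[ software == software ]] checks at the end of each test are the assertions on what that loop captured. A minimal sketch of that parsing shape, for illustration only (simplified field matching, not the verbatim accel.sh source):

    # Read "name: value" lines and remember the opcode/module accel_perf reports.
    # The exact field names matched below are assumptions for this sketch.
    while IFS=: read -r var val; do
        val=${val# }                        # trim the space after the colon
        case "$var" in
            *opcode*) accel_opc=$val ;;     # e.g. "xor", "dif_verify"
            *module*) accel_module=$val ;;  # e.g. "software"
        esac
    done < accel_perf_output.txt
    [[ -n $accel_module && -n $accel_opc ]] # same style of check seen at the end of each test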
00:08:51.772 [2024-07-24 23:09:14.066962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:08:51.772 [2024-07-24 23:09:14.205854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.030 [2024-07-24 23:09:14.323747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.030 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:52.031 23:09:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:53.403 23:09:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:53.403 00:08:53.403 real 0m1.510s 00:08:53.403 user 0m1.299s 00:08:53.403 sys 0m0.117s 00:08:53.403 23:09:15 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.403 ************************************ 00:08:53.403 END TEST accel_dif_verify 00:08:53.403 ************************************ 00:08:53.403 23:09:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:53.403 23:09:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:53.403 23:09:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:53.403 23:09:15 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:53.403 23:09:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.403 23:09:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:53.403 ************************************ 00:08:53.403 START TEST accel_dif_generate 00:08:53.403 ************************************ 00:08:53.403 23:09:15 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:08:53.403 23:09:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:53.403 23:09:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:53.403 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.403 23:09:15 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.403 23:09:15 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:53.403 23:09:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:53.404 23:09:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:53.404 [2024-07-24 23:09:15.626248] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:53.404 [2024-07-24 23:09:15.626353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61940 ] 00:08:53.404 [2024-07-24 23:09:15.764386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.404 [2024-07-24 23:09:15.875037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.662 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.663 23:09:15 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:53.663 23:09:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:55.038 23:09:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:55.038 00:08:55.038 real 0m1.500s 
00:08:55.038 user 0m1.294s 00:08:55.038 sys 0m0.116s 00:08:55.038 23:09:17 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.038 ************************************ 00:08:55.038 END TEST accel_dif_generate 00:08:55.038 ************************************ 00:08:55.038 23:09:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:55.038 23:09:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:55.038 23:09:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:55.038 23:09:17 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:55.038 23:09:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.038 23:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:08:55.038 ************************************ 00:08:55.038 START TEST accel_dif_generate_copy 00:08:55.038 ************************************ 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:55.038 [2024-07-24 23:09:17.173873] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
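The command traced at accel.sh@12 above is the whole of this test: run the accel_perf example for one second with the dif_generate_copy workload, then check the reported opcode and module. A hand-run sketch using only the flags visible in the trace (the JSON accel config that accel.sh feeds over /dev/fd/62 is omitted here, which assumes the defaults are acceptable for a manual run):

    # Run the software dif_generate_copy workload for 1 second, as the harness does.
    # Path copied from the log; -c /dev/fd/62 deliberately left out for a manual run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy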
00:08:55.038 [2024-07-24 23:09:17.173999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:08:55.038 [2024-07-24 23:09:17.312311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.038 [2024-07-24 23:09:17.456073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.038 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.039 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:55.297 23:09:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:56.232 00:08:56.232 real 0m1.536s 00:08:56.232 user 0m1.323s 00:08:56.232 sys 0m0.115s 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.232 ************************************ 00:08:56.232 END TEST accel_dif_generate_copy 00:08:56.232 ************************************ 00:08:56.232 23:09:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:56.491 23:09:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:56.491 23:09:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:56.491 23:09:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:56.491 23:09:18 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:56.491 23:09:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.491 23:09:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:56.491 ************************************ 00:08:56.491 START TEST accel_comp 00:08:56.491 ************************************ 00:08:56.491 23:09:18 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:56.491 23:09:18 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:56.491 23:09:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:56.491 [2024-07-24 23:09:18.762864] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:56.491 [2024-07-24 23:09:18.762955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62009 ] 00:08:56.491 [2024-07-24 23:09:18.898620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.750 [2024-07-24 23:09:19.026364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:56.750 23:09:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.125 23:09:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:58.126 23:09:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:58.126 00:08:58.126 real 0m1.516s 00:08:58.126 user 0m0.014s 00:08:58.126 sys 0m0.001s 00:08:58.126 23:09:20 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.126 23:09:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:58.126 ************************************ 00:08:58.126 END TEST accel_comp 00:08:58.126 ************************************ 00:08:58.126 23:09:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:58.126 23:09:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:58.126 23:09:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:58.126 23:09:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.126 23:09:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:58.126 ************************************ 00:08:58.126 START TEST accel_decomp 00:08:58.126 ************************************ 00:08:58.126 23:09:20 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:58.126 23:09:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:58.126 [2024-07-24 23:09:20.328048] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:08:58.126 [2024-07-24 23:09:20.328194] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62045 ] 00:08:58.126 [2024-07-24 23:09:20.469955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.126 [2024-07-24 23:09:20.609748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.384 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:58.385 23:09:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:59.760 23:09:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.760 00:08:59.760 real 0m1.537s 00:08:59.760 user 0m1.313s 00:08:59.760 sys 0m0.129s 00:08:59.760 23:09:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.760 ************************************ 00:08:59.760 END TEST accel_decomp 00:08:59.760 ************************************ 00:08:59.760 23:09:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:59.760 23:09:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:59.760 23:09:21 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.760 23:09:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:59.760 23:09:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.760 23:09:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.760 ************************************ 00:08:59.760 START TEST accel_decomp_full 00:08:59.760 ************************************ 00:08:59.760 23:09:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.760 23:09:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:59.760 23:09:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:59.760 23:09:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.760 23:09:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:59.761 23:09:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:59.761 [2024-07-24 23:09:21.907759] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:08:59.761 [2024-07-24 23:09:21.907850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62079 ] 00:08:59.761 [2024-07-24 23:09:22.042830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.761 [2024-07-24 23:09:22.180350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 23:09:22 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:59.761 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.020 23:09:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:00.953 23:09:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:00.954 23:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:00.954 23:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:00.954 23:09:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:00.954 00:09:00.954 real 0m1.536s 00:09:00.954 user 0m0.013s 00:09:00.954 sys 0m0.002s 00:09:00.954 23:09:23 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.954 23:09:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:00.954 ************************************ 00:09:00.954 END TEST accel_decomp_full 00:09:00.954 ************************************ 00:09:01.211 23:09:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:01.211 23:09:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:01.211 23:09:23 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:01.211 23:09:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.211 23:09:23 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.211 ************************************ 00:09:01.211 START TEST accel_decomp_mcore 00:09:01.211 ************************************ 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 
-m 0xf 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:01.211 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:01.211 [2024-07-24 23:09:23.495203] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:01.212 [2024-07-24 23:09:23.495291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62114 ] 00:09:01.212 [2024-07-24 23:09:23.630591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.470 [2024-07-24 23:09:23.771507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.470 [2024-07-24 23:09:23.771594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.470 [2024-07-24 23:09:23.772476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.470 [2024-07-24 23:09:23.772511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 
23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.470 23:09:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.844 00:09:02.844 real 0m1.617s 00:09:02.844 user 0m0.014s 00:09:02.844 sys 0m0.004s 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.844 23:09:25 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:02.844 ************************************ 00:09:02.844 END TEST accel_decomp_mcore 00:09:02.844 ************************************ 00:09:02.844 23:09:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:02.844 23:09:25 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:02.844 23:09:25 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:02.844 23:09:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.844 23:09:25 accel -- common/autotest_common.sh@10 -- # set +x 00:09:02.844 ************************************ 00:09:02.844 START TEST accel_decomp_full_mcore 00:09:02.844 ************************************ 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@40 -- # local IFS=, 00:09:02.845 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:02.845 [2024-07-24 23:09:25.159471] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:02.845 [2024-07-24 23:09:25.159597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62153 ] 00:09:02.845 [2024-07-24 23:09:25.297396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.104 [2024-07-24 23:09:25.455786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.104 [2024-07-24 23:09:25.455984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.104 [2024-07-24 23:09:25.455902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.104 [2024-07-24 23:09:25.455976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:03.104 23:09:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.478 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:04.479 00:09:04.479 real 0m1.580s 00:09:04.479 user 0m4.778s 00:09:04.479 sys 0m0.147s 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.479 ************************************ 00:09:04.479 23:09:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:04.479 END TEST accel_decomp_full_mcore 00:09:04.479 ************************************ 00:09:04.479 23:09:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:04.479 23:09:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:04.479 23:09:26 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:09:04.479 23:09:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.479 23:09:26 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.479 ************************************ 00:09:04.479 START TEST accel_decomp_mthread 00:09:04.479 ************************************ 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:04.479 23:09:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:04.479 [2024-07-24 23:09:26.768741] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:04.479 [2024-07-24 23:09:26.768821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62191 ] 00:09:04.479 [2024-07-24 23:09:26.903510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.737 [2024-07-24 23:09:27.040044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.737 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.738 23:09:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.127 00:09:06.127 real 0m1.534s 00:09:06.127 user 0m1.317s 00:09:06.127 sys 0m0.118s 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.127 23:09:28 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:06.127 ************************************ 00:09:06.127 END TEST accel_decomp_mthread 00:09:06.127 ************************************ 00:09:06.127 23:09:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:06.127 23:09:28 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:06.128 23:09:28 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:09:06.128 23:09:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.128 23:09:28 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.128 ************************************ 00:09:06.128 START 
TEST accel_decomp_full_mthread 00:09:06.128 ************************************ 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:06.128 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:06.128 [2024-07-24 23:09:28.356216] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:06.128 [2024-07-24 23:09:28.356353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62225 ] 00:09:06.128 [2024-07-24 23:09:28.496897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.386 [2024-07-24 23:09:28.622162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.386 23:09:28 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.386 23:09:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:07.761 23:09:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:07.761 00:09:07.762 real 0m1.561s 00:09:07.762 user 0m1.340s 00:09:07.762 sys 0m0.125s 00:09:07.762 23:09:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.762 ************************************ 00:09:07.762 END TEST accel_decomp_full_mthread 00:09:07.762 23:09:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:07.762 ************************************ 
00:09:07.762 23:09:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:07.762 23:09:29 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:07.762 23:09:29 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:07.762 23:09:29 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:07.762 23:09:29 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:07.762 23:09:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:07.762 23:09:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.762 23:09:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:07.762 23:09:29 accel -- common/autotest_common.sh@10 -- # set +x 00:09:07.762 23:09:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:07.762 23:09:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:07.762 23:09:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:07.762 23:09:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:07.762 23:09:29 accel -- accel/accel.sh@41 -- # jq -r . 00:09:07.762 ************************************ 00:09:07.762 START TEST accel_dif_functional_tests 00:09:07.762 ************************************ 00:09:07.762 23:09:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:07.762 [2024-07-24 23:09:30.000260] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:07.762 [2024-07-24 23:09:30.001101] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62261 ] 00:09:07.762 [2024-07-24 23:09:30.145657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.019 [2024-07-24 23:09:30.343855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.019 [2024-07-24 23:09:30.344005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.019 [2024-07-24 23:09:30.344003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.019 [2024-07-24 23:09:30.428792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:08.019 00:09:08.019 00:09:08.019 CUnit - A unit testing framework for C - Version 2.1-3 00:09:08.019 http://cunit.sourceforge.net/ 00:09:08.019 00:09:08.019 00:09:08.019 Suite: accel_dif 00:09:08.019 Test: verify: DIF generated, GUARD check ...passed 00:09:08.019 Test: verify: DIF generated, APPTAG check ...passed 00:09:08.019 Test: verify: DIF generated, REFTAG check ...passed 00:09:08.019 Test: verify: DIF not generated, GUARD check ...[2024-07-24 23:09:30.484475] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:08.019 passed 00:09:08.019 Test: verify: DIF not generated, APPTAG check ...passed 00:09:08.019 Test: verify: DIF not generated, REFTAG check ...passed 00:09:08.019 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:08.019 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:08.019 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:08.019 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-24 23:09:30.484561] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:08.019 [2024-07-24 23:09:30.484593] dif.c: 776:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:08.019 [2024-07-24 23:09:30.484654] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:08.019 passed 00:09:08.019 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:08.019 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:08.019 Test: verify copy: DIF generated, GUARD check ...passed 00:09:08.019 Test: verify copy: DIF generated, APPTAG check ...[2024-07-24 23:09:30.484795] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:08.019 passed 00:09:08.019 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:08.019 Test: verify copy: DIF not generated, GUARD check ...passed 00:09:08.019 Test: verify copy: DIF not generated, APPTAG check ...passed 00:09:08.019 Test: verify copy: DIF not generated, REFTAG check ...passed 00:09:08.019 Test: generate copy: DIF generated, GUARD check ...[2024-07-24 23:09:30.484985] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:08.019 [2024-07-24 23:09:30.485023] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:08.019 [2024-07-24 23:09:30.485053] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:08.019 passed 00:09:08.019 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:08.019 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:08.019 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:08.019 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:08.019 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:08.019 Test: generate copy: iovecs-len validate ...[2024-07-24 23:09:30.485301] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:09:08.019 passed 00:09:08.019 Test: generate copy: buffer alignment validate ...passed 00:09:08.019 00:09:08.019 Run Summary: Type Total Ran Passed Failed Inactive 00:09:08.019 suites 1 1 n/a 0 0 00:09:08.019 tests 26 26 26 0 0 00:09:08.019 asserts 115 115 115 0 n/a 00:09:08.019 00:09:08.019 Elapsed time = 0.002 seconds 00:09:08.600 00:09:08.600 real 0m0.948s 00:09:08.600 user 0m1.353s 00:09:08.600 sys 0m0.233s 00:09:08.600 23:09:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.600 ************************************ 00:09:08.600 END TEST accel_dif_functional_tests 00:09:08.600 23:09:30 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:08.600 ************************************ 00:09:08.600 23:09:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:08.600 00:09:08.600 real 0m35.853s 00:09:08.600 user 0m37.680s 00:09:08.600 sys 0m4.203s 00:09:08.600 23:09:30 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.600 ************************************ 00:09:08.600 END TEST accel 00:09:08.600 ************************************ 00:09:08.600 23:09:30 accel -- common/autotest_common.sh@10 -- # set +x 00:09:08.600 23:09:30 -- common/autotest_common.sh@1142 -- # return 0 00:09:08.600 23:09:30 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:08.600 23:09:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:08.600 23:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.600 23:09:30 -- common/autotest_common.sh@10 -- # set +x 00:09:08.600 ************************************ 00:09:08.600 START TEST accel_rpc 00:09:08.600 ************************************ 00:09:08.600 23:09:30 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:08.600 * Looking for test storage... 00:09:08.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:08.600 23:09:31 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:08.600 23:09:31 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62335 00:09:08.600 23:09:31 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62335 00:09:08.600 23:09:31 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62335 ']' 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.600 23:09:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.858 [2024-07-24 23:09:31.127703] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:08.858 [2024-07-24 23:09:31.127831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62335 ] 00:09:08.858 [2024-07-24 23:09:31.262030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.116 [2024-07-24 23:09:31.390313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.683 23:09:32 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.683 23:09:32 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:09.683 23:09:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:09.683 23:09:32 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:09.683 23:09:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:09.683 23:09:32 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:09.683 23:09:32 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:09.683 23:09:32 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:09.683 23:09:32 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.683 23:09:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.683 ************************************ 00:09:09.683 START TEST accel_assign_opcode 00:09:09.683 ************************************ 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.683 [2024-07-24 23:09:32.062835] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.683 [2024-07-24 23:09:32.070825] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.683 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.683 [2024-07-24 23:09:32.131355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:09.940 23:09:32 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.940 software 00:09:09.940 ************************************ 00:09:09.940 END TEST accel_assign_opcode 00:09:09.940 ************************************ 00:09:09.940 00:09:09.940 real 0m0.291s 00:09:09.940 user 0m0.053s 00:09:09.940 sys 0m0.013s 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.940 23:09:32 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:09.940 23:09:32 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62335 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62335 ']' 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62335 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62335 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:09.940 killing process with pid 62335 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62335' 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@967 -- # kill 62335 00:09:09.940 23:09:32 accel_rpc -- common/autotest_common.sh@972 -- # wait 62335 00:09:10.508 00:09:10.508 real 0m1.858s 00:09:10.508 user 0m1.934s 00:09:10.508 sys 0m0.424s 00:09:10.508 ************************************ 00:09:10.508 END TEST accel_rpc 00:09:10.508 ************************************ 00:09:10.508 23:09:32 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.508 23:09:32 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.508 23:09:32 -- common/autotest_common.sh@1142 -- # return 0 00:09:10.508 23:09:32 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:10.508 23:09:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.508 23:09:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.508 23:09:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.508 ************************************ 00:09:10.508 START TEST app_cmdline 00:09:10.508 ************************************ 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:10.508 * Looking for test storage... 
00:09:10.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:10.508 23:09:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:10.508 23:09:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62424 00:09:10.508 23:09:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:10.508 23:09:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62424 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62424 ']' 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.508 23:09:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:10.766 [2024-07-24 23:09:33.053499] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:10.766 [2024-07-24 23:09:33.053656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62424 ] 00:09:10.766 [2024-07-24 23:09:33.198669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.024 [2024-07-24 23:09:33.327827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.024 [2024-07-24 23:09:33.384161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:11.589 23:09:34 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.589 23:09:34 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:09:11.589 23:09:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:11.846 { 00:09:11.846 "version": "SPDK v24.09-pre git sha1 3c25cfe1d", 00:09:11.846 "fields": { 00:09:11.846 "major": 24, 00:09:11.846 "minor": 9, 00:09:11.846 "patch": 0, 00:09:11.846 "suffix": "-pre", 00:09:11.846 "commit": "3c25cfe1d" 00:09:11.846 } 00:09:11.846 } 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:11.846 23:09:34 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.846 23:09:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:11.846 23:09:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:11.846 23:09:34 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.103 23:09:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:12.103 23:09:34 app_cmdline -- app/cmdline.sh@28 -- # [[ 
rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:12.103 23:09:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:12.103 23:09:34 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.360 request: 00:09:12.360 { 00:09:12.360 "method": "env_dpdk_get_mem_stats", 00:09:12.360 "req_id": 1 00:09:12.360 } 00:09:12.360 Got JSON-RPC error response 00:09:12.360 response: 00:09:12.360 { 00:09:12.360 "code": -32601, 00:09:12.360 "message": "Method not found" 00:09:12.360 } 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.360 23:09:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62424 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62424 ']' 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62424 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62424 00:09:12.360 killing process with pid 62424 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62424' 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@967 -- # kill 62424 00:09:12.360 23:09:34 app_cmdline -- common/autotest_common.sh@972 -- # wait 62424 00:09:12.617 00:09:12.617 real 0m2.166s 00:09:12.617 user 0m2.696s 00:09:12.617 sys 0m0.506s 00:09:12.617 ************************************ 00:09:12.617 END TEST app_cmdline 00:09:12.617 ************************************ 00:09:12.617 23:09:35 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.617 23:09:35 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.617 23:09:35 -- common/autotest_common.sh@1142 -- # return 0 00:09:12.617 23:09:35 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:12.617 23:09:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.617 23:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.617 23:09:35 -- common/autotest_common.sh@10 -- # set +x 00:09:12.617 ************************************ 00:09:12.617 START TEST version 00:09:12.617 ************************************ 00:09:12.617 23:09:35 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:12.875 * Looking for test storage... 00:09:12.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:12.875 23:09:35 version -- app/version.sh@17 -- # get_header_version major 00:09:12.875 23:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # cut -f2 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:09:12.875 23:09:35 version -- app/version.sh@17 -- # major=24 00:09:12.875 23:09:35 version -- app/version.sh@18 -- # get_header_version minor 00:09:12.875 23:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # cut -f2 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:09:12.875 23:09:35 version -- app/version.sh@18 -- # minor=9 00:09:12.875 23:09:35 version -- app/version.sh@19 -- # get_header_version patch 00:09:12.875 23:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # cut -f2 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:09:12.875 23:09:35 version -- app/version.sh@19 -- # patch=0 00:09:12.875 23:09:35 version -- app/version.sh@20 -- # get_header_version suffix 00:09:12.875 23:09:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # cut -f2 00:09:12.875 23:09:35 version -- app/version.sh@14 -- # tr -d '"' 00:09:12.875 23:09:35 version -- app/version.sh@20 -- # suffix=-pre 00:09:12.875 23:09:35 version -- app/version.sh@22 -- # version=24.9 00:09:12.875 23:09:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:12.875 23:09:35 version -- app/version.sh@28 -- # version=24.9rc0 00:09:12.875 23:09:35 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:12.875 23:09:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:12.875 23:09:35 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:12.875 23:09:35 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:12.875 00:09:12.875 real 0m0.173s 00:09:12.875 user 0m0.099s 00:09:12.875 sys 0m0.100s 00:09:12.875 ************************************ 00:09:12.875 END TEST version 00:09:12.875 ************************************ 00:09:12.875 23:09:35 
version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.875 23:09:35 version -- common/autotest_common.sh@10 -- # set +x 00:09:12.875 23:09:35 -- common/autotest_common.sh@1142 -- # return 0 00:09:12.875 23:09:35 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:09:12.875 23:09:35 -- spdk/autotest.sh@198 -- # uname -s 00:09:12.875 23:09:35 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:09:12.875 23:09:35 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:09:12.875 23:09:35 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:09:12.875 23:09:35 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:09:12.875 23:09:35 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:12.875 23:09:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:12.875 23:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.875 23:09:35 -- common/autotest_common.sh@10 -- # set +x 00:09:12.875 ************************************ 00:09:12.875 START TEST spdk_dd 00:09:12.875 ************************************ 00:09:12.876 23:09:35 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:09:13.134 * Looking for test storage... 00:09:13.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:13.134 23:09:35 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.134 23:09:35 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.134 23:09:35 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.134 23:09:35 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.134 23:09:35 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.134 23:09:35 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.134 23:09:35 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.134 23:09:35 spdk_dd -- paths/export.sh@5 -- # export PATH 00:09:13.134 23:09:35 spdk_dd -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.134 23:09:35 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:13.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:13.393 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:13.393 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:13.393 23:09:35 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:09:13.393 23:09:35 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@230 -- # local class 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@232 -- # local progif 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@233 -- # class=01 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@15 -- # local i 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:09:13.393 23:09:35 
spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@24 -- # return 0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:09:13.393 23:09:35 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:13.393 23:09:35 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@139 -- # local lib so 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 
00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:09:13.393 23:09:35 
spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:09:13.393 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:09:13.394 
23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 
spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:09:13.394 * spdk_dd linked to liburing 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:09:13.394 23:09:35 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 
00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:13.394 23:09:35 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@57 
-- # CONFIG_HAVE_LIBBSD=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:13.395 23:09:35 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:09:13.395 23:09:35 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:09:13.395 23:09:35 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:09:13.395 23:09:35 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:09:13.395 23:09:35 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:09:13.395 23:09:35 spdk_dd -- dd/common.sh@157 -- # return 0 00:09:13.653 23:09:35 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:09:13.653 23:09:35 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:13.653 23:09:35 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:13.653 23:09:35 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.653 23:09:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:13.653 ************************************ 00:09:13.653 START TEST spdk_dd_basic_rw 00:09:13.653 ************************************ 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:09:13.653 * Looking for test storage... 
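For reference, the dd/common.sh trace above (script lines 142-157) walks spdk_dd's shared-library dependencies, reports "spdk_dd linked to liburing" once one of them matches liburing.so.*, and exports liburing_in_use=1 so dd.sh can skip the no-uring fallback. A minimal sketch of that check, assuming the dependency list comes from ldd (the trace itself only shows the parsed fields being compared):

  #!/usr/bin/env bash
  # Sketch of the liburing link check traced in dd/common.sh@142-157.
  spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  liburing_in_use=0
  while read -r lib _ so _; do
      # ldd lines look like: "liburing.so.2 => /usr/lib64/liburing.so.2 (0x...)"
      if [[ $lib == liburing.so.* ]]; then
          printf '* spdk_dd linked to liburing\n'
          liburing_in_use=1
          break
      fi
  done < <(ldd "$spdk_dd")
  export liburing_in_use

The traced script also appears to confirm CONFIG_URING=y from the sourced build_config.sh and that /usr/lib64/liburing.so.2 exists before exporting the flag and returning 0.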
00:09:13.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.653 23:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:09:13.654 23:09:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:09:13.914 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:09:13.914 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:13.915 ************************************ 00:09:13.915 START TEST dd_bs_lt_native_bs 00:09:13.915 ************************************ 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:13.915 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:09:13.915 { 00:09:13.915 "subsystems": [ 00:09:13.915 { 00:09:13.915 "subsystem": "bdev", 00:09:13.915 "config": [ 00:09:13.915 { 00:09:13.915 "params": { 00:09:13.915 "trtype": "pcie", 00:09:13.915 "traddr": "0000:00:10.0", 00:09:13.915 "name": "Nvme0" 00:09:13.915 }, 00:09:13.915 "method": "bdev_nvme_attach_controller" 00:09:13.915 }, 00:09:13.915 { 00:09:13.915 "method": "bdev_wait_for_examine" 00:09:13.915 } 00:09:13.915 ] 00:09:13.915 } 00:09:13.915 ] 00:09:13.915 } 00:09:13.915 [2024-07-24 23:09:36.217186] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
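Backing up a step: dd/common.sh@126-134 above capture the full spdk_nvme_identify report for 0000:00:10.0 and pull the native block size out of it with two regex matches, first the current LBA format (#04) and then that format's data size (4096 bytes). The NOT-wrapped spdk_dd call starting here is therefore expected to fail, since --bs=2048 is smaller than that native size; the "--bs value cannot be less than ... native block size" error a little further down confirms it. A condensed re-creation of the extraction, assuming the identify output is held in one variable rather than the mapfile array the script uses:

  # Sketch: derive the native block size the way the traced regexes do.
  id_out=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
               -r 'trtype:pcie traddr:0000:00:10.0')
  re_current='Current LBA Format: *LBA Format #([0-9]+)'
  [[ $id_out =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}      # -> 04
  re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id_out =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}    # -> 4096
  echo "$native_bs"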
00:09:13.915 [2024-07-24 23:09:36.217501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:09:13.915 [2024-07-24 23:09:36.354211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.172 [2024-07-24 23:09:36.492767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.172 [2024-07-24 23:09:36.552718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:14.431 [2024-07-24 23:09:36.665013] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:09:14.431 [2024-07-24 23:09:36.665100] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:14.431 [2024-07-24 23:09:36.794788] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:09:14.431 ************************************ 00:09:14.431 END TEST dd_bs_lt_native_bs 00:09:14.431 ************************************ 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:14.431 00:09:14.431 real 0m0.743s 00:09:14.431 user 0m0.521s 00:09:14.431 sys 0m0.166s 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.431 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:14.690 ************************************ 00:09:14.690 START TEST dd_rw 00:09:14.690 ************************************ 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:14.690 23:09:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:15.257 23:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:09:15.257 23:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:15.257 23:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:15.257 23:09:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:15.257 [2024-07-24 23:09:37.668646] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:15.257 [2024-07-24 23:09:37.668723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62786 ] 00:09:15.257 { 00:09:15.257 "subsystems": [ 00:09:15.257 { 00:09:15.257 "subsystem": "bdev", 00:09:15.257 "config": [ 00:09:15.257 { 00:09:15.257 "params": { 00:09:15.257 "trtype": "pcie", 00:09:15.257 "traddr": "0000:00:10.0", 00:09:15.257 "name": "Nvme0" 00:09:15.257 }, 00:09:15.257 "method": "bdev_nvme_attach_controller" 00:09:15.257 }, 00:09:15.257 { 00:09:15.257 "method": "bdev_wait_for_examine" 00:09:15.257 } 00:09:15.257 ] 00:09:15.257 } 00:09:15.257 ] 00:09:15.257 } 00:09:15.516 [2024-07-24 23:09:37.802283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.516 [2024-07-24 23:09:37.913606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.516 [2024-07-24 23:09:37.967614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.032  Copying: 60/60 [kB] (average 19 MBps) 00:09:16.032 00:09:16.032 23:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:09:16.032 23:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:16.032 23:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:16.032 23:09:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:16.032 { 00:09:16.032 "subsystems": [ 00:09:16.032 { 00:09:16.032 "subsystem": "bdev", 00:09:16.032 "config": [ 
00:09:16.032 { 00:09:16.032 "params": { 00:09:16.032 "trtype": "pcie", 00:09:16.032 "traddr": "0000:00:10.0", 00:09:16.032 "name": "Nvme0" 00:09:16.032 }, 00:09:16.032 "method": "bdev_nvme_attach_controller" 00:09:16.032 }, 00:09:16.032 { 00:09:16.032 "method": "bdev_wait_for_examine" 00:09:16.032 } 00:09:16.032 ] 00:09:16.032 } 00:09:16.032 ] 00:09:16.032 } 00:09:16.032 [2024-07-24 23:09:38.351421] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:16.032 [2024-07-24 23:09:38.351514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62794 ] 00:09:16.032 [2024-07-24 23:09:38.491355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.291 [2024-07-24 23:09:38.624781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.291 [2024-07-24 23:09:38.683270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.549  Copying: 60/60 [kB] (average 19 MBps) 00:09:16.549 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:16.549 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:16.808 { 00:09:16.808 "subsystems": [ 00:09:16.808 { 00:09:16.808 "subsystem": "bdev", 00:09:16.808 "config": [ 00:09:16.808 { 00:09:16.808 "params": { 00:09:16.808 "trtype": "pcie", 00:09:16.808 "traddr": "0000:00:10.0", 00:09:16.808 "name": "Nvme0" 00:09:16.808 }, 00:09:16.808 "method": "bdev_nvme_attach_controller" 00:09:16.808 }, 00:09:16.808 { 00:09:16.808 "method": "bdev_wait_for_examine" 00:09:16.808 } 00:09:16.808 ] 00:09:16.808 } 00:09:16.808 ] 00:09:16.808 } 00:09:16.808 [2024-07-24 23:09:39.079684] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:16.808 [2024-07-24 23:09:39.080394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62815 ] 00:09:16.808 [2024-07-24 23:09:39.228405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.066 [2024-07-24 23:09:39.344396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.066 [2024-07-24 23:09:39.403536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:17.323  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:17.323 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:17.323 23:09:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:18.258 23:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:09:18.258 23:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:18.258 23:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:18.258 23:09:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:18.258 { 00:09:18.258 "subsystems": [ 00:09:18.258 { 00:09:18.258 "subsystem": "bdev", 00:09:18.258 "config": [ 00:09:18.258 { 00:09:18.258 "params": { 00:09:18.258 "trtype": "pcie", 00:09:18.258 "traddr": "0000:00:10.0", 00:09:18.258 "name": "Nvme0" 00:09:18.258 }, 00:09:18.258 "method": "bdev_nvme_attach_controller" 00:09:18.258 }, 00:09:18.258 { 00:09:18.258 "method": "bdev_wait_for_examine" 00:09:18.258 } 00:09:18.258 ] 00:09:18.258 } 00:09:18.258 ] 00:09:18.258 } 00:09:18.258 [2024-07-24 23:09:40.494451] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
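The first dd_rw pass (bs=4096, qd=1) that wraps up above follows a fixed cycle: write a freshly generated 61440-byte file to the Nvme0n1 bdev, read it back into a second file, diff the two, then zero the first MiB before the next combination. The bdev configuration comes from gen_conf over an anonymous descriptor (--json /dev/fd/62); in the sketch below a plain file path stands in for that, and head -c on /dev/urandom is only a stand-in for gen_bytes — the spdk_dd flags themselves are the ones visible in the trace:

  # Sketch of one dd_rw iteration (bs=4096, qd=1, count=15 -> 61440 bytes).
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  CONF=/tmp/nvme0.json   # hypothetical path holding the bdev_nvme_attach_controller config printed in the log

  head -c 61440 /dev/urandom > "$DUMP0"                                             # stand-in for gen_bytes 61440
  "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json "$CONF"             # write 60 kB to the bdev
  "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json "$CONF"  # read it back
  diff -q "$DUMP0" "$DUMP1"                                                         # must be identical
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"      # clear_nvme: zero 1 MiB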
00:09:18.258 [2024-07-24 23:09:40.494553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62834 ] 00:09:18.258 [2024-07-24 23:09:40.638063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.516 [2024-07-24 23:09:40.769637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.516 [2024-07-24 23:09:40.827525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:18.775  Copying: 60/60 [kB] (average 58 MBps) 00:09:18.775 00:09:18.775 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:09:18.775 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:18.775 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:18.775 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:18.775 [2024-07-24 23:09:41.226570] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:18.775 [2024-07-24 23:09:41.226677] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62853 ] 00:09:18.775 { 00:09:18.775 "subsystems": [ 00:09:18.775 { 00:09:18.775 "subsystem": "bdev", 00:09:18.775 "config": [ 00:09:18.775 { 00:09:18.775 "params": { 00:09:18.775 "trtype": "pcie", 00:09:18.775 "traddr": "0000:00:10.0", 00:09:18.775 "name": "Nvme0" 00:09:18.775 }, 00:09:18.775 "method": "bdev_nvme_attach_controller" 00:09:18.775 }, 00:09:18.775 { 00:09:18.775 "method": "bdev_wait_for_examine" 00:09:18.775 } 00:09:18.775 ] 00:09:18.775 } 00:09:18.775 ] 00:09:18.775 } 00:09:19.033 [2024-07-24 23:09:41.364688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.033 [2024-07-24 23:09:41.501388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.293 [2024-07-24 23:09:41.560995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.551  Copying: 60/60 [kB] (average 58 MBps) 00:09:19.551 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:19.551 23:09:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:19.551 { 00:09:19.551 "subsystems": [ 00:09:19.551 { 00:09:19.551 "subsystem": "bdev", 00:09:19.551 "config": [ 00:09:19.551 { 00:09:19.551 "params": { 00:09:19.551 "trtype": "pcie", 00:09:19.551 "traddr": "0000:00:10.0", 00:09:19.551 "name": "Nvme0" 00:09:19.551 }, 00:09:19.551 "method": "bdev_nvme_attach_controller" 00:09:19.551 }, 00:09:19.551 { 00:09:19.551 "method": "bdev_wait_for_examine" 00:09:19.551 } 00:09:19.551 ] 00:09:19.551 } 00:09:19.551 ] 00:09:19.551 } 00:09:19.551 [2024-07-24 23:09:41.975195] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:19.551 [2024-07-24 23:09:41.975304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62874 ] 00:09:19.811 [2024-07-24 23:09:42.116967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.811 [2024-07-24 23:09:42.248349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.069 [2024-07-24 23:09:42.306469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.327  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:20.327 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:20.327 23:09:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:20.907 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:09:20.907 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:20.907 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:20.907 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:20.907 { 00:09:20.907 "subsystems": [ 00:09:20.907 { 00:09:20.907 "subsystem": "bdev", 00:09:20.907 "config": [ 00:09:20.907 { 00:09:20.908 "params": { 00:09:20.908 "trtype": "pcie", 00:09:20.908 "traddr": "0000:00:10.0", 00:09:20.908 "name": "Nvme0" 00:09:20.908 }, 00:09:20.908 "method": "bdev_nvme_attach_controller" 00:09:20.908 }, 00:09:20.908 { 00:09:20.908 "method": "bdev_wait_for_examine" 00:09:20.908 } 00:09:20.908 ] 00:09:20.908 } 00:09:20.908 ] 00:09:20.908 } 00:09:20.908 [2024-07-24 23:09:43.327388] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:20.908 [2024-07-24 23:09:43.327495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62893 ] 00:09:21.165 [2024-07-24 23:09:43.464126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.165 [2024-07-24 23:09:43.571619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.165 [2024-07-24 23:09:43.624613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:21.689  Copying: 56/56 [kB] (average 27 MBps) 00:09:21.689 00:09:21.689 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:09:21.689 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:21.689 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:21.689 23:09:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:21.689 [2024-07-24 23:09:44.001070] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:21.689 [2024-07-24 23:09:44.001178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62912 ] 00:09:21.689 { 00:09:21.689 "subsystems": [ 00:09:21.689 { 00:09:21.689 "subsystem": "bdev", 00:09:21.689 "config": [ 00:09:21.689 { 00:09:21.689 "params": { 00:09:21.689 "trtype": "pcie", 00:09:21.689 "traddr": "0000:00:10.0", 00:09:21.689 "name": "Nvme0" 00:09:21.689 }, 00:09:21.689 "method": "bdev_nvme_attach_controller" 00:09:21.689 }, 00:09:21.689 { 00:09:21.689 "method": "bdev_wait_for_examine" 00:09:21.689 } 00:09:21.689 ] 00:09:21.689 } 00:09:21.689 ] 00:09:21.689 } 00:09:21.689 [2024-07-24 23:09:44.138147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.948 [2024-07-24 23:09:44.248864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.948 [2024-07-24 23:09:44.302551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.207  Copying: 56/56 [kB] (average 27 MBps) 00:09:22.207 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 
--json /dev/fd/62 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:22.207 23:09:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:22.207 [2024-07-24 23:09:44.686315] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:22.207 [2024-07-24 23:09:44.686426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62922 ] 00:09:22.466 { 00:09:22.466 "subsystems": [ 00:09:22.466 { 00:09:22.466 "subsystem": "bdev", 00:09:22.466 "config": [ 00:09:22.466 { 00:09:22.466 "params": { 00:09:22.466 "trtype": "pcie", 00:09:22.466 "traddr": "0000:00:10.0", 00:09:22.466 "name": "Nvme0" 00:09:22.466 }, 00:09:22.466 "method": "bdev_nvme_attach_controller" 00:09:22.466 }, 00:09:22.466 { 00:09:22.466 "method": "bdev_wait_for_examine" 00:09:22.466 } 00:09:22.466 ] 00:09:22.466 } 00:09:22.466 ] 00:09:22.466 } 00:09:22.466 [2024-07-24 23:09:44.827505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.466 [2024-07-24 23:09:44.937054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.724 [2024-07-24 23:09:44.994052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:22.983  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:22.983 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:22.983 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:23.550 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:09:23.550 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:23.550 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:23.550 23:09:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:23.550 [2024-07-24 23:09:45.980595] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
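The runs above all hand spdk_dd the same bdev configuration through --json /dev/fd/62: attach the PCIe NVMe controller at 0000:00:10.0 as "Nvme0", then wait for bdev examination to finish. The following is only a stand-alone sketch of that invocation; the inline conf string and the local dd.dump0 path are stand-ins for the suite's gen_conf helper and its files under /home/vagrant/spdk_repo/spdk/test/dd.

#!/usr/bin/env bash
# Sketch, not the test script: assumes spdk_dd is built at the path below and an
# NVMe device sits at PCIe 0000:00:10.0, as in this log.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

# Same JSON the log prints before every run.
conf='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}'

# Write a local dd.dump0 onto the attached bdev in 8 KiB blocks at queue depth 64.
# Process substitution passes the config as /dev/fd/NN, which is why the log shows
# "--json /dev/fd/62".
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json <(printf '%s' "$conf")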
00:09:23.550 [2024-07-24 23:09:45.980696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62952 ] 00:09:23.550 { 00:09:23.550 "subsystems": [ 00:09:23.550 { 00:09:23.550 "subsystem": "bdev", 00:09:23.550 "config": [ 00:09:23.550 { 00:09:23.550 "params": { 00:09:23.550 "trtype": "pcie", 00:09:23.550 "traddr": "0000:00:10.0", 00:09:23.550 "name": "Nvme0" 00:09:23.550 }, 00:09:23.550 "method": "bdev_nvme_attach_controller" 00:09:23.550 }, 00:09:23.550 { 00:09:23.550 "method": "bdev_wait_for_examine" 00:09:23.550 } 00:09:23.550 ] 00:09:23.550 } 00:09:23.550 ] 00:09:23.550 } 00:09:23.808 [2024-07-24 23:09:46.121470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.808 [2024-07-24 23:09:46.255993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.067 [2024-07-24 23:09:46.314912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:24.325  Copying: 56/56 [kB] (average 54 MBps) 00:09:24.325 00:09:24.325 23:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:24.325 23:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:09:24.325 23:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:24.325 23:09:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:24.325 [2024-07-24 23:09:46.714533] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:24.325 [2024-07-24 23:09:46.714624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62960 ] 00:09:24.325 { 00:09:24.325 "subsystems": [ 00:09:24.325 { 00:09:24.325 "subsystem": "bdev", 00:09:24.325 "config": [ 00:09:24.325 { 00:09:24.325 "params": { 00:09:24.325 "trtype": "pcie", 00:09:24.325 "traddr": "0000:00:10.0", 00:09:24.325 "name": "Nvme0" 00:09:24.325 }, 00:09:24.325 "method": "bdev_nvme_attach_controller" 00:09:24.325 }, 00:09:24.325 { 00:09:24.325 "method": "bdev_wait_for_examine" 00:09:24.325 } 00:09:24.325 ] 00:09:24.325 } 00:09:24.325 ] 00:09:24.325 } 00:09:24.584 [2024-07-24 23:09:46.855884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.584 [2024-07-24 23:09:46.978535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.584 [2024-07-24 23:09:47.034832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.101  Copying: 56/56 [kB] (average 54 MBps) 00:09:25.101 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:25.101 23:09:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:25.101 { 00:09:25.101 "subsystems": [ 00:09:25.101 { 00:09:25.101 "subsystem": "bdev", 00:09:25.101 "config": [ 00:09:25.101 { 00:09:25.101 "params": { 00:09:25.101 "trtype": "pcie", 00:09:25.101 "traddr": "0000:00:10.0", 00:09:25.101 "name": "Nvme0" 00:09:25.101 }, 00:09:25.101 "method": "bdev_nvme_attach_controller" 00:09:25.101 }, 00:09:25.101 { 00:09:25.101 "method": "bdev_wait_for_examine" 00:09:25.101 } 00:09:25.101 ] 00:09:25.101 } 00:09:25.101 ] 00:09:25.101 } 00:09:25.101 [2024-07-24 23:09:47.427680] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:25.101 [2024-07-24 23:09:47.427766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62981 ] 00:09:25.101 [2024-07-24 23:09:47.566945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.363 [2024-07-24 23:09:47.674102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.363 [2024-07-24 23:09:47.728201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:25.625  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:25.625 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:25.625 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:09:26.192 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:26.192 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:26.192 23:09:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 [2024-07-24 23:09:48.626964] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:26.192 [2024-07-24 23:09:48.627331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63000 ] 00:09:26.192 { 00:09:26.192 "subsystems": [ 00:09:26.192 { 00:09:26.192 "subsystem": "bdev", 00:09:26.192 "config": [ 00:09:26.192 { 00:09:26.192 "params": { 00:09:26.192 "trtype": "pcie", 00:09:26.192 "traddr": "0000:00:10.0", 00:09:26.192 "name": "Nvme0" 00:09:26.192 }, 00:09:26.192 "method": "bdev_nvme_attach_controller" 00:09:26.192 }, 00:09:26.192 { 00:09:26.192 "method": "bdev_wait_for_examine" 00:09:26.192 } 00:09:26.192 ] 00:09:26.192 } 00:09:26.192 ] 00:09:26.192 } 00:09:26.449 [2024-07-24 23:09:48.762295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.450 [2024-07-24 23:09:48.911936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.708 [2024-07-24 23:09:48.965180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:26.966  Copying: 48/48 [kB] (average 46 MBps) 00:09:26.966 00:09:26.966 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:09:26.966 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:26.966 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:26.966 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:26.966 [2024-07-24 23:09:49.344184] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
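Each bs/qd combination in this loop amounts to a write/read/verify round trip against the Nvme0n1 bdev, followed by a zero-fill before the next pass. A hedged sketch of the bs=16384, qd=1, count=3 case shown above, again with an inline config and local dump files standing in for the suite's gen_conf/gen_bytes helpers and repo paths:

#!/usr/bin/env bash
# dd.dump0 is assumed to be a pre-generated 49152-byte pattern (the suite makes it
# with gen_bytes; any 48 KiB file will do for the sketch).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'

bs=16384 qd=1 count=3

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(printf '%s' "$conf")                   # write
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(printf '%s' "$conf")  # read back
diff -q dd.dump0 dd.dump1                                                                                   # must match
# clear_nvme between passes: overwrite the first 1 MiB of the bdev with zeroes.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")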
00:09:26.966 [2024-07-24 23:09:49.344284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63019 ] 00:09:26.966 { 00:09:26.966 "subsystems": [ 00:09:26.966 { 00:09:26.966 "subsystem": "bdev", 00:09:26.966 "config": [ 00:09:26.966 { 00:09:26.966 "params": { 00:09:26.966 "trtype": "pcie", 00:09:26.966 "traddr": "0000:00:10.0", 00:09:26.966 "name": "Nvme0" 00:09:26.966 }, 00:09:26.966 "method": "bdev_nvme_attach_controller" 00:09:26.966 }, 00:09:26.966 { 00:09:26.966 "method": "bdev_wait_for_examine" 00:09:26.966 } 00:09:26.966 ] 00:09:26.966 } 00:09:26.966 ] 00:09:26.966 } 00:09:27.224 [2024-07-24 23:09:49.481856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.224 [2024-07-24 23:09:49.590848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.224 [2024-07-24 23:09:49.645311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:27.741  Copying: 48/48 [kB] (average 23 MBps) 00:09:27.741 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:27.741 23:09:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:27.741 [2024-07-24 23:09:50.041102] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:27.741 [2024-07-24 23:09:50.041210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63035 ] 00:09:27.741 { 00:09:27.741 "subsystems": [ 00:09:27.741 { 00:09:27.741 "subsystem": "bdev", 00:09:27.741 "config": [ 00:09:27.741 { 00:09:27.741 "params": { 00:09:27.741 "trtype": "pcie", 00:09:27.741 "traddr": "0000:00:10.0", 00:09:27.741 "name": "Nvme0" 00:09:27.741 }, 00:09:27.741 "method": "bdev_nvme_attach_controller" 00:09:27.741 }, 00:09:27.741 { 00:09:27.741 "method": "bdev_wait_for_examine" 00:09:27.741 } 00:09:27.741 ] 00:09:27.741 } 00:09:27.741 ] 00:09:27.741 } 00:09:27.741 [2024-07-24 23:09:50.175268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.999 [2024-07-24 23:09:50.290656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.999 [2024-07-24 23:09:50.344949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:28.257  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:28.257 00:09:28.257 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:09:28.257 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:09:28.257 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:09:28.257 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:09:28.257 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:09:28.258 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:09:28.258 23:09:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:28.944 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:09:28.944 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:09:28.944 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:28.944 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:28.944 [2024-07-24 23:09:51.243372] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:28.945 [2024-07-24 23:09:51.243470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63059 ] 00:09:28.945 { 00:09:28.945 "subsystems": [ 00:09:28.945 { 00:09:28.945 "subsystem": "bdev", 00:09:28.945 "config": [ 00:09:28.945 { 00:09:28.945 "params": { 00:09:28.945 "trtype": "pcie", 00:09:28.945 "traddr": "0000:00:10.0", 00:09:28.945 "name": "Nvme0" 00:09:28.945 }, 00:09:28.945 "method": "bdev_nvme_attach_controller" 00:09:28.945 }, 00:09:28.945 { 00:09:28.945 "method": "bdev_wait_for_examine" 00:09:28.945 } 00:09:28.945 ] 00:09:28.945 } 00:09:28.945 ] 00:09:28.945 } 00:09:28.945 [2024-07-24 23:09:51.384214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.205 [2024-07-24 23:09:51.521615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.205 [2024-07-24 23:09:51.579129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:29.463  Copying: 48/48 [kB] (average 46 MBps) 00:09:29.463 00:09:29.463 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:09:29.463 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:09:29.463 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:29.463 23:09:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:29.722 { 00:09:29.722 "subsystems": [ 00:09:29.722 { 00:09:29.722 "subsystem": "bdev", 00:09:29.722 "config": [ 00:09:29.722 { 00:09:29.722 "params": { 00:09:29.722 "trtype": "pcie", 00:09:29.722 "traddr": "0000:00:10.0", 00:09:29.722 "name": "Nvme0" 00:09:29.722 }, 00:09:29.722 "method": "bdev_nvme_attach_controller" 00:09:29.722 }, 00:09:29.722 { 00:09:29.722 "method": "bdev_wait_for_examine" 00:09:29.722 } 00:09:29.722 ] 00:09:29.722 } 00:09:29.722 ] 00:09:29.722 } 00:09:29.722 [2024-07-24 23:09:51.971004] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:29.722 [2024-07-24 23:09:51.971102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63073 ] 00:09:29.722 [2024-07-24 23:09:52.109992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.980 [2024-07-24 23:09:52.231523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.980 [2024-07-24 23:09:52.288532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:30.240  Copying: 48/48 [kB] (average 46 MBps) 00:09:30.240 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:30.240 23:09:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:30.240 { 00:09:30.240 "subsystems": [ 00:09:30.240 { 00:09:30.240 "subsystem": "bdev", 00:09:30.240 "config": [ 00:09:30.240 { 00:09:30.240 "params": { 00:09:30.240 "trtype": "pcie", 00:09:30.240 "traddr": "0000:00:10.0", 00:09:30.240 "name": "Nvme0" 00:09:30.240 }, 00:09:30.240 "method": "bdev_nvme_attach_controller" 00:09:30.240 }, 00:09:30.240 { 00:09:30.240 "method": "bdev_wait_for_examine" 00:09:30.240 } 00:09:30.240 ] 00:09:30.240 } 00:09:30.240 ] 00:09:30.240 } 00:09:30.240 [2024-07-24 23:09:52.690303] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:30.240 [2024-07-24 23:09:52.690681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63088 ] 00:09:30.498 [2024-07-24 23:09:52.831088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.498 [2024-07-24 23:09:52.951467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.756 [2024-07-24 23:09:53.007933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.014  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:31.014 00:09:31.014 00:09:31.014 real 0m16.397s 00:09:31.014 user 0m12.267s 00:09:31.014 sys 0m5.652s 00:09:31.014 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.014 ************************************ 00:09:31.015 END TEST dd_rw 00:09:31.015 ************************************ 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:31.015 ************************************ 00:09:31.015 START TEST dd_rw_offset 00:09:31.015 ************************************ 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=a6hpbm0il9sbrixv6q1eyas2u0s52hlr80mhtr8isewsyluoo325son98iulet7sbw0flq3fu2dr308iomxcf3ucr4shzi8dyul0gl7zl7tf0g7k6h1gssu8i2ovoxviuhjfknslf63isp1va69pnhvnxto9tv6oq215lmv97q6zpq2hiw8mlfz4g7o1gz6smke1xm0vs4m1d03lnl6aehsdxa7kwt25t9udr30nernsw5o08byjw62fr5z27lzt1t50bifihfnd7vmfh93q8mvzredqellt29xsr647qo663d76lzh312y5rt2aiis5aainwky4bpm0kw03i13ov4pild4a8ly11wrusm9lbivulukrtjcct6qltqxvh0lblq1dog6nkq0oodu8bt12og3s4yjoi7i50z4zj1fla28alp10382ten2tnpfgxs3oogpo12v8g4uwn06a7lprqcyzfpo3xhuqjfqee3z4zan0ns8oeeggjosgm6f9zzdx49i8hk7hqa83sisdy7urafanipf9dzizc9y3l64lvd26nhx5vm7q4kh6jjyq555elobssyzg7khycpsacgrm87lbwl2fe0trw7th6pq4kgrktry9tydyegn75mrhv2c8z89ikb7mtwnjq9a94iu1gmz5mlbmv3ees2ozhkcmrhollsvc042qoaklyw69jr7netsvgv5xpz56hyce9jwwh3fyaz72vfw831693iht2xinabdjzydjp0azztd4bacvtjvmzkkepk21vkke9ope8c1tou5tbcyvglawxek2njm1cmknfi0b2u2wn1xomws0temh724b70en1ywkzfekrt21och1o7uwe6fhgqs0rupm0ifz3vi3pxapz469ogg5p59k491ffr67mj8lpj2jyt2lg4632lrk3spgial2gjnjujliy2qeg3k2m4qo745p4ghygdcminahl8lttplh4nglec1s3mf6vmwo940dvalj6qpat9kjdklrn6dyq8yu1t75dvptnre9yya6rp8ohql1jfzmryv2iv3d5m0aposxt853bzha1itpye2vytzdv0cse5ne73bhcyv6riugvouyx8bl6pk549a5bwnbefhh7meevh90y304zw7hzxhnofhg0pcmk0uqxkwsgwxui5a5pptnbj1kfk1gjx6g3cwqaus75uvvwo4ydftx2d0s9v59t61598526igepr6xez9uavbuvd5arg4m232al9o55eihdcea6hj4fs90xlzerj1j5u1syn0dtj9hhvb1eqz2banhn7serv85q11ro03squjc4t3qmv33gtbm229fjgm5bj3xajob5b163qz407u83ozagicv1joonwvly2fuhrhcwk42gpqk8pc59afm0i6njrxgn2ejxr0tphgokkyyc78c674sbx9upwsq3a9seam0zmx8dtxrbgxxuar73e0pmygzqc5zky0qaa7dzgujcz7t8lqy6k9jwgbn33zkag355ebji617iy5g4v5wl25rg3a70kn5yb5pngp07acvwpt2ztipjivsh6ag2a4bdlautxe9ofjdpwnkcj6sd71mruw8xmsrq1lngu1v3ax1edoxvzuv1hlinmatnu273i3wpctjz14xd4knsfbir0cu4vk6an6qpsx65ozqsnare1scg4eyd9k4e8ri9dldavsbwku23xlglibw1hrcs1z30z5srpycexj42o977wynnmn0aumxngyf7oskqmxfjjkk7io2qb4zszi5vce0u5aysu6iszrwqdchkfjfglehpdv8c6y61bjszxug0qsz9fzhrv8shxe12iijismqc12jyz5q9za4mqn9s8ilzw0f0mg4vg4ujd72fo2qn7wrvqqmnjsxzcdorcphp9j29n1fn6qzl5z9xdsh4hqxagbcnnha7jdxmouembszjbnsfhgwbdvvi0jpc0wcenyr3yrruccn016yo5s14eu1bhc7k772p2euc19eoypjovd88jhu6c4ultkfynfn77bmpo3nnxz8oxf9gnmiejqbbkmhuwm5udot7sm2e1oo5t88my56yod5lgn5ltg8nby7tywmx5alaanatmyrbucbob9et22kd2jfuvbgak4t6j9lvtfqhnbaxtzxr1ppi8kvlv2es4rjcubpv9wcqqsz8i2tju6gro20foyplbxy8386i6pwop2uua2qe0gvvkt33krt7aob185olzzb306fmceb7v4m4rihzazbngki7vrglkusdhccrshhb3nq81fym5j5fzytagaywtsk3yqbvyn35zvwd6kbrg9scx01sljqguxhmj7auhowknj4pk54w6nh36eo7aebm292t0vdfv6a224jdn8jke8vrfntor7vcyo2vwmuiktv0do0vsnly96s6vkob0rsegi9at83sfh89vwkkngeh9ej0v02ls8na5pa8e1pli0uc1x1c4gzagmke16bzsz1a39fmpqy2yai9vv9dm6j2yl2mgjf5kqd3w3qwm94wc9ujg7omufjp32wjdt65lokme5svlpmcppzo6e1phnksfu1qy662roullzxzwi2kk4y9h6hzqgoqpbv6djgkqdb3hjh0hhqajdrdbwoce2hnjb928yjk437lifnlxcbxzsn6cceindhkc5ccszz7t2zd43xp07akomsr6ttovieav4biomwye4kqu2hss6cn3r591mz4trbi45cknlyyhf0vhq1t1lt15hagh8at2xtjwqpwb8vl40qzbcclum5g32olvr80g1s44j9o5l2hc31k2aqb1xc3qumuudqub5khw8nlt72rcn0fxzzk73b35u3expcv4n1ddinjca5hsresmyqwh51amvqdm6sd1ek08hlxbkr6r3to5whhdn5tazpv5rk20qinhx5r0qckqfnberkflld4vjlit13e51qnmm8i8ert3fxeg4fpb0s8lhcnr5rgzzj7zqurkbqtn77dcfpiqjaalbqby0fo3yix3pv00yqdwu40io6rsv6qufkqe7qo2eossahn6j4j776c0ktrjf1vtyb7b5wbipb3heqfxlc6270v6xz1ty0hg28jpl59kojm0c8bon68zg8mvl65jl7phpwimsr9tju7d86jmdw12fg8qa531pkkdkyjkvu7u5ex6qyanlj5c8g28o8m21ak9dr8nma2zpthx1jrxvk1sh7dtwcan4q5byo0rnjwl22ddh1ni8gn8wmzbxdcq1vet31ge9dnsgeqyyrzy26rsvrhviqcina47sg8j4d03w28srr5mnj7p052ikfuig4wtorcbi1erpktv5ksm0qbsbmzi4oqhc863ce5l1wo6tk6lr43mclq64wtkq4zuzu22ljg9e8tlipjws6nj0fikdct938qrjtgzov196end1tvypum4ojwu2o5depr4ae8tc60vox6sey38d2k6pi5r1mxkuyebm4x8nrv9t7vzpr6fdaow0hxzkjhuoelwqms84182v4q8
yufh5n9jg3vr5eln1g4j6qdn5ehvs99bu2te764gkv3vbw1wh9rdpuws84tg6mkc1vpvpzg5yv47g18s2schww1dwvjfardtm5qo2vjc82rdhpmgieofczykg3mpxmaqr7zxbmh3c385r0jvwfo5rn505fkk18ywbn2u8uoq4fudgzf83miyudcyf25nq0g7tro4g65u5fx8wc13ckdei0ziy59v1kr5s4fmi5digu7snrjdp2lg4z26v9dcxj3lsfuua0bp0d2s29cqd4ymqmh794x73gudws03p3kba4glagr8r7h37yxtxj0hhzout1xkotgk0x8r1fgm1l6s9b92r2cctfzmrp3xfbzefk3c522bztl6t2j23umqk2b9ivijstaneeqm40fjf12oao9nlfbvlhdi1ol6qdxnslv1yvhf9fxtzs53w59j0ldp0p4caagn9nugjer6yi6noq6inpci7oza9cuauvs235n3pp7vy2hkl5vnewd26ncwhhfudq4a6go2282dbwwsloc6kcv0wf5gbq 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:31.015 23:09:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:31.272 { 00:09:31.272 "subsystems": [ 00:09:31.272 { 00:09:31.272 "subsystem": "bdev", 00:09:31.272 "config": [ 00:09:31.272 { 00:09:31.272 "params": { 00:09:31.272 "trtype": "pcie", 00:09:31.272 "traddr": "0000:00:10.0", 00:09:31.272 "name": "Nvme0" 00:09:31.272 }, 00:09:31.272 "method": "bdev_nvme_attach_controller" 00:09:31.272 }, 00:09:31.272 { 00:09:31.272 "method": "bdev_wait_for_examine" 00:09:31.273 } 00:09:31.273 ] 00:09:31.273 } 00:09:31.273 ] 00:09:31.273 } 00:09:31.273 [2024-07-24 23:09:53.504052] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:31.273 [2024-07-24 23:09:53.504162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:09:31.273 [2024-07-24 23:09:53.642673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.531 [2024-07-24 23:09:53.785226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.531 [2024-07-24 23:09:53.842295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.790  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:31.790 00:09:31.790 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:09:31.790 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:09:31.790 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:09:31.790 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:31.790 [2024-07-24 23:09:54.230360] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
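The dd_rw_offset commands above check that --seek on the write side and --skip on the read side address the same block: 4 KiB of generated data is written at block offset 1, read back with --skip=1 --count=1, and compared with read -rn4096. A rough sketch under the same assumptions as the earlier sketches, with /dev/urandom standing in for the suite's gen_bytes 4096 helper:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'

# Stand-in for gen_bytes 4096: 4096 random lowercase/digit characters.
data=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)
printf '%s' "$data" > dd.dump0

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")            # write at block offset 1
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")  # read the same block back

read -rn4096 data_check < dd.dump1        # same 4 KiB comparison the test performs
[[ "$data_check" == "$data" ]] && echo 'offset round trip OK'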
00:09:31.790 [2024-07-24 23:09:54.230463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63138 ] 00:09:31.790 { 00:09:31.790 "subsystems": [ 00:09:31.790 { 00:09:31.790 "subsystem": "bdev", 00:09:31.790 "config": [ 00:09:31.790 { 00:09:31.790 "params": { 00:09:31.790 "trtype": "pcie", 00:09:31.790 "traddr": "0000:00:10.0", 00:09:31.790 "name": "Nvme0" 00:09:31.790 }, 00:09:31.790 "method": "bdev_nvme_attach_controller" 00:09:31.790 }, 00:09:31.790 { 00:09:31.790 "method": "bdev_wait_for_examine" 00:09:31.790 } 00:09:31.790 ] 00:09:31.790 } 00:09:31.790 ] 00:09:31.790 } 00:09:32.047 [2024-07-24 23:09:54.371972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.047 [2024-07-24 23:09:54.491985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.305 [2024-07-24 23:09:54.551382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.565  Copying: 4096/4096 [B] (average 4000 kBps) 00:09:32.565 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:09:32.565 ************************************ 00:09:32.565 END TEST dd_rw_offset 00:09:32.565 ************************************ 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ a6hpbm0il9sbrixv6q1eyas2u0s52hlr80mhtr8isewsyluoo325son98iulet7sbw0flq3fu2dr308iomxcf3ucr4shzi8dyul0gl7zl7tf0g7k6h1gssu8i2ovoxviuhjfknslf63isp1va69pnhvnxto9tv6oq215lmv97q6zpq2hiw8mlfz4g7o1gz6smke1xm0vs4m1d03lnl6aehsdxa7kwt25t9udr30nernsw5o08byjw62fr5z27lzt1t50bifihfnd7vmfh93q8mvzredqellt29xsr647qo663d76lzh312y5rt2aiis5aainwky4bpm0kw03i13ov4pild4a8ly11wrusm9lbivulukrtjcct6qltqxvh0lblq1dog6nkq0oodu8bt12og3s4yjoi7i50z4zj1fla28alp10382ten2tnpfgxs3oogpo12v8g4uwn06a7lprqcyzfpo3xhuqjfqee3z4zan0ns8oeeggjosgm6f9zzdx49i8hk7hqa83sisdy7urafanipf9dzizc9y3l64lvd26nhx5vm7q4kh6jjyq555elobssyzg7khycpsacgrm87lbwl2fe0trw7th6pq4kgrktry9tydyegn75mrhv2c8z89ikb7mtwnjq9a94iu1gmz5mlbmv3ees2ozhkcmrhollsvc042qoaklyw69jr7netsvgv5xpz56hyce9jwwh3fyaz72vfw831693iht2xinabdjzydjp0azztd4bacvtjvmzkkepk21vkke9ope8c1tou5tbcyvglawxek2njm1cmknfi0b2u2wn1xomws0temh724b70en1ywkzfekrt21och1o7uwe6fhgqs0rupm0ifz3vi3pxapz469ogg5p59k491ffr67mj8lpj2jyt2lg4632lrk3spgial2gjnjujliy2qeg3k2m4qo745p4ghygdcminahl8lttplh4nglec1s3mf6vmwo940dvalj6qpat9kjdklrn6dyq8yu1t75dvptnre9yya6rp8ohql1jfzmryv2iv3d5m0aposxt853bzha1itpye2vytzdv0cse5ne73bhcyv6riugvouyx8bl6pk549a5bwnbefhh7meevh90y304zw7hzxhnofhg0pcmk0uqxkwsgwxui5a5pptnbj1kfk1gjx6g3cwqaus75uvvwo4ydftx2d0s9v59t61598526igepr6xez9uavbuvd5arg4m232al9o55eihdcea6hj4fs90xlzerj1j5u1syn0dtj9hhvb1eqz2banhn7serv85q11ro03squjc4t3qmv33gtbm229fjgm5bj3xajob5b163qz407u83ozagicv1joonwvly2fuhrhcwk42gpqk8pc59afm0i6njrxgn2ejxr0tphgokkyyc78c674sbx9upwsq3a9seam0zmx8dtxrbgxxuar73e0pmygzqc5zky0qaa7dzgujcz7t8lqy6k9jwgbn33zkag355ebji617iy5g4v5wl25rg3a70kn5yb5pngp07acvwpt2ztipjivsh6ag2a4bdlautxe9ofjdpwnkcj6sd71mruw8xmsrq1lngu1v3ax1edoxvzuv1hlinmatnu273i3wpctjz14xd4knsfbir0cu4vk6an6qpsx65ozqsnare1scg4eyd9k4e8ri9dldavsbwku23xlglibw1hrcs1z30z5srpycexj42o977wynnmn0aumxngyf7oskqmxfjjkk7io2qb4zszi5vce0u5aysu6iszrwqdchkfjfglehpdv8c6y61bjszxug0qsz9fzhrv8shxe12iijismqc12jyz5q9za4mqn9s8ilzw0f0mg4vg4ujd72fo2qn7wrvqqmnjsxzcdorcphp9j29n1fn6qzl5z9xdsh4hqxagbcnnha7jdxmouembszjbnsfhgwbdvvi0jpc0wcenyr3yrruccn016yo5s14eu1bhc7k772p2euc19eoy
pjovd88jhu6c4ultkfynfn77bmpo3nnxz8oxf9gnmiejqbbkmhuwm5udot7sm2e1oo5t88my56yod5lgn5ltg8nby7tywmx5alaanatmyrbucbob9et22kd2jfuvbgak4t6j9lvtfqhnbaxtzxr1ppi8kvlv2es4rjcubpv9wcqqsz8i2tju6gro20foyplbxy8386i6pwop2uua2qe0gvvkt33krt7aob185olzzb306fmceb7v4m4rihzazbngki7vrglkusdhccrshhb3nq81fym5j5fzytagaywtsk3yqbvyn35zvwd6kbrg9scx01sljqguxhmj7auhowknj4pk54w6nh36eo7aebm292t0vdfv6a224jdn8jke8vrfntor7vcyo2vwmuiktv0do0vsnly96s6vkob0rsegi9at83sfh89vwkkngeh9ej0v02ls8na5pa8e1pli0uc1x1c4gzagmke16bzsz1a39fmpqy2yai9vv9dm6j2yl2mgjf5kqd3w3qwm94wc9ujg7omufjp32wjdt65lokme5svlpmcppzo6e1phnksfu1qy662roullzxzwi2kk4y9h6hzqgoqpbv6djgkqdb3hjh0hhqajdrdbwoce2hnjb928yjk437lifnlxcbxzsn6cceindhkc5ccszz7t2zd43xp07akomsr6ttovieav4biomwye4kqu2hss6cn3r591mz4trbi45cknlyyhf0vhq1t1lt15hagh8at2xtjwqpwb8vl40qzbcclum5g32olvr80g1s44j9o5l2hc31k2aqb1xc3qumuudqub5khw8nlt72rcn0fxzzk73b35u3expcv4n1ddinjca5hsresmyqwh51amvqdm6sd1ek08hlxbkr6r3to5whhdn5tazpv5rk20qinhx5r0qckqfnberkflld4vjlit13e51qnmm8i8ert3fxeg4fpb0s8lhcnr5rgzzj7zqurkbqtn77dcfpiqjaalbqby0fo3yix3pv00yqdwu40io6rsv6qufkqe7qo2eossahn6j4j776c0ktrjf1vtyb7b5wbipb3heqfxlc6270v6xz1ty0hg28jpl59kojm0c8bon68zg8mvl65jl7phpwimsr9tju7d86jmdw12fg8qa531pkkdkyjkvu7u5ex6qyanlj5c8g28o8m21ak9dr8nma2zpthx1jrxvk1sh7dtwcan4q5byo0rnjwl22ddh1ni8gn8wmzbxdcq1vet31ge9dnsgeqyyrzy26rsvrhviqcina47sg8j4d03w28srr5mnj7p052ikfuig4wtorcbi1erpktv5ksm0qbsbmzi4oqhc863ce5l1wo6tk6lr43mclq64wtkq4zuzu22ljg9e8tlipjws6nj0fikdct938qrjtgzov196end1tvypum4ojwu2o5depr4ae8tc60vox6sey38d2k6pi5r1mxkuyebm4x8nrv9t7vzpr6fdaow0hxzkjhuoelwqms84182v4q8yufh5n9jg3vr5eln1g4j6qdn5ehvs99bu2te764gkv3vbw1wh9rdpuws84tg6mkc1vpvpzg5yv47g18s2schww1dwvjfardtm5qo2vjc82rdhpmgieofczykg3mpxmaqr7zxbmh3c385r0jvwfo5rn505fkk18ywbn2u8uoq4fudgzf83miyudcyf25nq0g7tro4g65u5fx8wc13ckdei0ziy59v1kr5s4fmi5digu7snrjdp2lg4z26v9dcxj3lsfuua0bp0d2s29cqd4ymqmh794x73gudws03p3kba4glagr8r7h37yxtxj0hhzout1xkotgk0x8r1fgm1l6s9b92r2cctfzmrp3xfbzefk3c522bztl6t2j23umqk2b9ivijstaneeqm40fjf12oao9nlfbvlhdi1ol6qdxnslv1yvhf9fxtzs53w59j0ldp0p4caagn9nugjer6yi6noq6inpci7oza9cuauvs235n3pp7vy2hkl5vnewd26ncwhhfudq4a6go2282dbwwsloc6kcv0wf5gbq == 
\a\6\h\p\b\m\0\i\l\9\s\b\r\i\x\v\6\q\1\e\y\a\s\2\u\0\s\5\2\h\l\r\8\0\m\h\t\r\8\i\s\e\w\s\y\l\u\o\o\3\2\5\s\o\n\9\8\i\u\l\e\t\7\s\b\w\0\f\l\q\3\f\u\2\d\r\3\0\8\i\o\m\x\c\f\3\u\c\r\4\s\h\z\i\8\d\y\u\l\0\g\l\7\z\l\7\t\f\0\g\7\k\6\h\1\g\s\s\u\8\i\2\o\v\o\x\v\i\u\h\j\f\k\n\s\l\f\6\3\i\s\p\1\v\a\6\9\p\n\h\v\n\x\t\o\9\t\v\6\o\q\2\1\5\l\m\v\9\7\q\6\z\p\q\2\h\i\w\8\m\l\f\z\4\g\7\o\1\g\z\6\s\m\k\e\1\x\m\0\v\s\4\m\1\d\0\3\l\n\l\6\a\e\h\s\d\x\a\7\k\w\t\2\5\t\9\u\d\r\3\0\n\e\r\n\s\w\5\o\0\8\b\y\j\w\6\2\f\r\5\z\2\7\l\z\t\1\t\5\0\b\i\f\i\h\f\n\d\7\v\m\f\h\9\3\q\8\m\v\z\r\e\d\q\e\l\l\t\2\9\x\s\r\6\4\7\q\o\6\6\3\d\7\6\l\z\h\3\1\2\y\5\r\t\2\a\i\i\s\5\a\a\i\n\w\k\y\4\b\p\m\0\k\w\0\3\i\1\3\o\v\4\p\i\l\d\4\a\8\l\y\1\1\w\r\u\s\m\9\l\b\i\v\u\l\u\k\r\t\j\c\c\t\6\q\l\t\q\x\v\h\0\l\b\l\q\1\d\o\g\6\n\k\q\0\o\o\d\u\8\b\t\1\2\o\g\3\s\4\y\j\o\i\7\i\5\0\z\4\z\j\1\f\l\a\2\8\a\l\p\1\0\3\8\2\t\e\n\2\t\n\p\f\g\x\s\3\o\o\g\p\o\1\2\v\8\g\4\u\w\n\0\6\a\7\l\p\r\q\c\y\z\f\p\o\3\x\h\u\q\j\f\q\e\e\3\z\4\z\a\n\0\n\s\8\o\e\e\g\g\j\o\s\g\m\6\f\9\z\z\d\x\4\9\i\8\h\k\7\h\q\a\8\3\s\i\s\d\y\7\u\r\a\f\a\n\i\p\f\9\d\z\i\z\c\9\y\3\l\6\4\l\v\d\2\6\n\h\x\5\v\m\7\q\4\k\h\6\j\j\y\q\5\5\5\e\l\o\b\s\s\y\z\g\7\k\h\y\c\p\s\a\c\g\r\m\8\7\l\b\w\l\2\f\e\0\t\r\w\7\t\h\6\p\q\4\k\g\r\k\t\r\y\9\t\y\d\y\e\g\n\7\5\m\r\h\v\2\c\8\z\8\9\i\k\b\7\m\t\w\n\j\q\9\a\9\4\i\u\1\g\m\z\5\m\l\b\m\v\3\e\e\s\2\o\z\h\k\c\m\r\h\o\l\l\s\v\c\0\4\2\q\o\a\k\l\y\w\6\9\j\r\7\n\e\t\s\v\g\v\5\x\p\z\5\6\h\y\c\e\9\j\w\w\h\3\f\y\a\z\7\2\v\f\w\8\3\1\6\9\3\i\h\t\2\x\i\n\a\b\d\j\z\y\d\j\p\0\a\z\z\t\d\4\b\a\c\v\t\j\v\m\z\k\k\e\p\k\2\1\v\k\k\e\9\o\p\e\8\c\1\t\o\u\5\t\b\c\y\v\g\l\a\w\x\e\k\2\n\j\m\1\c\m\k\n\f\i\0\b\2\u\2\w\n\1\x\o\m\w\s\0\t\e\m\h\7\2\4\b\7\0\e\n\1\y\w\k\z\f\e\k\r\t\2\1\o\c\h\1\o\7\u\w\e\6\f\h\g\q\s\0\r\u\p\m\0\i\f\z\3\v\i\3\p\x\a\p\z\4\6\9\o\g\g\5\p\5\9\k\4\9\1\f\f\r\6\7\m\j\8\l\p\j\2\j\y\t\2\l\g\4\6\3\2\l\r\k\3\s\p\g\i\a\l\2\g\j\n\j\u\j\l\i\y\2\q\e\g\3\k\2\m\4\q\o\7\4\5\p\4\g\h\y\g\d\c\m\i\n\a\h\l\8\l\t\t\p\l\h\4\n\g\l\e\c\1\s\3\m\f\6\v\m\w\o\9\4\0\d\v\a\l\j\6\q\p\a\t\9\k\j\d\k\l\r\n\6\d\y\q\8\y\u\1\t\7\5\d\v\p\t\n\r\e\9\y\y\a\6\r\p\8\o\h\q\l\1\j\f\z\m\r\y\v\2\i\v\3\d\5\m\0\a\p\o\s\x\t\8\5\3\b\z\h\a\1\i\t\p\y\e\2\v\y\t\z\d\v\0\c\s\e\5\n\e\7\3\b\h\c\y\v\6\r\i\u\g\v\o\u\y\x\8\b\l\6\p\k\5\4\9\a\5\b\w\n\b\e\f\h\h\7\m\e\e\v\h\9\0\y\3\0\4\z\w\7\h\z\x\h\n\o\f\h\g\0\p\c\m\k\0\u\q\x\k\w\s\g\w\x\u\i\5\a\5\p\p\t\n\b\j\1\k\f\k\1\g\j\x\6\g\3\c\w\q\a\u\s\7\5\u\v\v\w\o\4\y\d\f\t\x\2\d\0\s\9\v\5\9\t\6\1\5\9\8\5\2\6\i\g\e\p\r\6\x\e\z\9\u\a\v\b\u\v\d\5\a\r\g\4\m\2\3\2\a\l\9\o\5\5\e\i\h\d\c\e\a\6\h\j\4\f\s\9\0\x\l\z\e\r\j\1\j\5\u\1\s\y\n\0\d\t\j\9\h\h\v\b\1\e\q\z\2\b\a\n\h\n\7\s\e\r\v\8\5\q\1\1\r\o\0\3\s\q\u\j\c\4\t\3\q\m\v\3\3\g\t\b\m\2\2\9\f\j\g\m\5\b\j\3\x\a\j\o\b\5\b\1\6\3\q\z\4\0\7\u\8\3\o\z\a\g\i\c\v\1\j\o\o\n\w\v\l\y\2\f\u\h\r\h\c\w\k\4\2\g\p\q\k\8\p\c\5\9\a\f\m\0\i\6\n\j\r\x\g\n\2\e\j\x\r\0\t\p\h\g\o\k\k\y\y\c\7\8\c\6\7\4\s\b\x\9\u\p\w\s\q\3\a\9\s\e\a\m\0\z\m\x\8\d\t\x\r\b\g\x\x\u\a\r\7\3\e\0\p\m\y\g\z\q\c\5\z\k\y\0\q\a\a\7\d\z\g\u\j\c\z\7\t\8\l\q\y\6\k\9\j\w\g\b\n\3\3\z\k\a\g\3\5\5\e\b\j\i\6\1\7\i\y\5\g\4\v\5\w\l\2\5\r\g\3\a\7\0\k\n\5\y\b\5\p\n\g\p\0\7\a\c\v\w\p\t\2\z\t\i\p\j\i\v\s\h\6\a\g\2\a\4\b\d\l\a\u\t\x\e\9\o\f\j\d\p\w\n\k\c\j\6\s\d\7\1\m\r\u\w\8\x\m\s\r\q\1\l\n\g\u\1\v\3\a\x\1\e\d\o\x\v\z\u\v\1\h\l\i\n\m\a\t\n\u\2\7\3\i\3\w\p\c\t\j\z\1\4\x\d\4\k\n\s\f\b\i\r\0\c\u\4\v\k\6\a\n\6\q\p\s\x\6\5\o\z\q\s\n\a\r\e\1\s\c\g\4\e\y\d\9\k\4\e\8\r\i\9\d\l\d\a\v\s\b\w\k\u\2\3\x\l\g\l\i\b\w\1\h\r\c\s\1\z\3\0\z\5\s\r\p\y\c\e\x\j\4\2\o\9\7\7\w\y\n\n\m\n\0\a\u\m\x\n\g\y\f\7\o\s\k\q\m\x\f\j\j\k\k\7\
i\o\2\q\b\4\z\s\z\i\5\v\c\e\0\u\5\a\y\s\u\6\i\s\z\r\w\q\d\c\h\k\f\j\f\g\l\e\h\p\d\v\8\c\6\y\6\1\b\j\s\z\x\u\g\0\q\s\z\9\f\z\h\r\v\8\s\h\x\e\1\2\i\i\j\i\s\m\q\c\1\2\j\y\z\5\q\9\z\a\4\m\q\n\9\s\8\i\l\z\w\0\f\0\m\g\4\v\g\4\u\j\d\7\2\f\o\2\q\n\7\w\r\v\q\q\m\n\j\s\x\z\c\d\o\r\c\p\h\p\9\j\2\9\n\1\f\n\6\q\z\l\5\z\9\x\d\s\h\4\h\q\x\a\g\b\c\n\n\h\a\7\j\d\x\m\o\u\e\m\b\s\z\j\b\n\s\f\h\g\w\b\d\v\v\i\0\j\p\c\0\w\c\e\n\y\r\3\y\r\r\u\c\c\n\0\1\6\y\o\5\s\1\4\e\u\1\b\h\c\7\k\7\7\2\p\2\e\u\c\1\9\e\o\y\p\j\o\v\d\8\8\j\h\u\6\c\4\u\l\t\k\f\y\n\f\n\7\7\b\m\p\o\3\n\n\x\z\8\o\x\f\9\g\n\m\i\e\j\q\b\b\k\m\h\u\w\m\5\u\d\o\t\7\s\m\2\e\1\o\o\5\t\8\8\m\y\5\6\y\o\d\5\l\g\n\5\l\t\g\8\n\b\y\7\t\y\w\m\x\5\a\l\a\a\n\a\t\m\y\r\b\u\c\b\o\b\9\e\t\2\2\k\d\2\j\f\u\v\b\g\a\k\4\t\6\j\9\l\v\t\f\q\h\n\b\a\x\t\z\x\r\1\p\p\i\8\k\v\l\v\2\e\s\4\r\j\c\u\b\p\v\9\w\c\q\q\s\z\8\i\2\t\j\u\6\g\r\o\2\0\f\o\y\p\l\b\x\y\8\3\8\6\i\6\p\w\o\p\2\u\u\a\2\q\e\0\g\v\v\k\t\3\3\k\r\t\7\a\o\b\1\8\5\o\l\z\z\b\3\0\6\f\m\c\e\b\7\v\4\m\4\r\i\h\z\a\z\b\n\g\k\i\7\v\r\g\l\k\u\s\d\h\c\c\r\s\h\h\b\3\n\q\8\1\f\y\m\5\j\5\f\z\y\t\a\g\a\y\w\t\s\k\3\y\q\b\v\y\n\3\5\z\v\w\d\6\k\b\r\g\9\s\c\x\0\1\s\l\j\q\g\u\x\h\m\j\7\a\u\h\o\w\k\n\j\4\p\k\5\4\w\6\n\h\3\6\e\o\7\a\e\b\m\2\9\2\t\0\v\d\f\v\6\a\2\2\4\j\d\n\8\j\k\e\8\v\r\f\n\t\o\r\7\v\c\y\o\2\v\w\m\u\i\k\t\v\0\d\o\0\v\s\n\l\y\9\6\s\6\v\k\o\b\0\r\s\e\g\i\9\a\t\8\3\s\f\h\8\9\v\w\k\k\n\g\e\h\9\e\j\0\v\0\2\l\s\8\n\a\5\p\a\8\e\1\p\l\i\0\u\c\1\x\1\c\4\g\z\a\g\m\k\e\1\6\b\z\s\z\1\a\3\9\f\m\p\q\y\2\y\a\i\9\v\v\9\d\m\6\j\2\y\l\2\m\g\j\f\5\k\q\d\3\w\3\q\w\m\9\4\w\c\9\u\j\g\7\o\m\u\f\j\p\3\2\w\j\d\t\6\5\l\o\k\m\e\5\s\v\l\p\m\c\p\p\z\o\6\e\1\p\h\n\k\s\f\u\1\q\y\6\6\2\r\o\u\l\l\z\x\z\w\i\2\k\k\4\y\9\h\6\h\z\q\g\o\q\p\b\v\6\d\j\g\k\q\d\b\3\h\j\h\0\h\h\q\a\j\d\r\d\b\w\o\c\e\2\h\n\j\b\9\2\8\y\j\k\4\3\7\l\i\f\n\l\x\c\b\x\z\s\n\6\c\c\e\i\n\d\h\k\c\5\c\c\s\z\z\7\t\2\z\d\4\3\x\p\0\7\a\k\o\m\s\r\6\t\t\o\v\i\e\a\v\4\b\i\o\m\w\y\e\4\k\q\u\2\h\s\s\6\c\n\3\r\5\9\1\m\z\4\t\r\b\i\4\5\c\k\n\l\y\y\h\f\0\v\h\q\1\t\1\l\t\1\5\h\a\g\h\8\a\t\2\x\t\j\w\q\p\w\b\8\v\l\4\0\q\z\b\c\c\l\u\m\5\g\3\2\o\l\v\r\8\0\g\1\s\4\4\j\9\o\5\l\2\h\c\3\1\k\2\a\q\b\1\x\c\3\q\u\m\u\u\d\q\u\b\5\k\h\w\8\n\l\t\7\2\r\c\n\0\f\x\z\z\k\7\3\b\3\5\u\3\e\x\p\c\v\4\n\1\d\d\i\n\j\c\a\5\h\s\r\e\s\m\y\q\w\h\5\1\a\m\v\q\d\m\6\s\d\1\e\k\0\8\h\l\x\b\k\r\6\r\3\t\o\5\w\h\h\d\n\5\t\a\z\p\v\5\r\k\2\0\q\i\n\h\x\5\r\0\q\c\k\q\f\n\b\e\r\k\f\l\l\d\4\v\j\l\i\t\1\3\e\5\1\q\n\m\m\8\i\8\e\r\t\3\f\x\e\g\4\f\p\b\0\s\8\l\h\c\n\r\5\r\g\z\z\j\7\z\q\u\r\k\b\q\t\n\7\7\d\c\f\p\i\q\j\a\a\l\b\q\b\y\0\f\o\3\y\i\x\3\p\v\0\0\y\q\d\w\u\4\0\i\o\6\r\s\v\6\q\u\f\k\q\e\7\q\o\2\e\o\s\s\a\h\n\6\j\4\j\7\7\6\c\0\k\t\r\j\f\1\v\t\y\b\7\b\5\w\b\i\p\b\3\h\e\q\f\x\l\c\6\2\7\0\v\6\x\z\1\t\y\0\h\g\2\8\j\p\l\5\9\k\o\j\m\0\c\8\b\o\n\6\8\z\g\8\m\v\l\6\5\j\l\7\p\h\p\w\i\m\s\r\9\t\j\u\7\d\8\6\j\m\d\w\1\2\f\g\8\q\a\5\3\1\p\k\k\d\k\y\j\k\v\u\7\u\5\e\x\6\q\y\a\n\l\j\5\c\8\g\2\8\o\8\m\2\1\a\k\9\d\r\8\n\m\a\2\z\p\t\h\x\1\j\r\x\v\k\1\s\h\7\d\t\w\c\a\n\4\q\5\b\y\o\0\r\n\j\w\l\2\2\d\d\h\1\n\i\8\g\n\8\w\m\z\b\x\d\c\q\1\v\e\t\3\1\g\e\9\d\n\s\g\e\q\y\y\r\z\y\2\6\r\s\v\r\h\v\i\q\c\i\n\a\4\7\s\g\8\j\4\d\0\3\w\2\8\s\r\r\5\m\n\j\7\p\0\5\2\i\k\f\u\i\g\4\w\t\o\r\c\b\i\1\e\r\p\k\t\v\5\k\s\m\0\q\b\s\b\m\z\i\4\o\q\h\c\8\6\3\c\e\5\l\1\w\o\6\t\k\6\l\r\4\3\m\c\l\q\6\4\w\t\k\q\4\z\u\z\u\2\2\l\j\g\9\e\8\t\l\i\p\j\w\s\6\n\j\0\f\i\k\d\c\t\9\3\8\q\r\j\t\g\z\o\v\1\9\6\e\n\d\1\t\v\y\p\u\m\4\o\j\w\u\2\o\5\d\e\p\r\4\a\e\8\t\c\6\0\v\o\x\6\s\e\y\3\8\d\2\k\6\p\i\5\r\1\m\x\k\u\y\e\b\m\4\x\8\n\r\v\9\t\7\v\z\p\r\6\f\d\a\o\w\0\h\x\z\k\j\h\u\o\e\l\w\q\m\s\8\4\1\8\2\v\4\q\8\y\u\f\h\5
\n\9\j\g\3\v\r\5\e\l\n\1\g\4\j\6\q\d\n\5\e\h\v\s\9\9\b\u\2\t\e\7\6\4\g\k\v\3\v\b\w\1\w\h\9\r\d\p\u\w\s\8\4\t\g\6\m\k\c\1\v\p\v\p\z\g\5\y\v\4\7\g\1\8\s\2\s\c\h\w\w\1\d\w\v\j\f\a\r\d\t\m\5\q\o\2\v\j\c\8\2\r\d\h\p\m\g\i\e\o\f\c\z\y\k\g\3\m\p\x\m\a\q\r\7\z\x\b\m\h\3\c\3\8\5\r\0\j\v\w\f\o\5\r\n\5\0\5\f\k\k\1\8\y\w\b\n\2\u\8\u\o\q\4\f\u\d\g\z\f\8\3\m\i\y\u\d\c\y\f\2\5\n\q\0\g\7\t\r\o\4\g\6\5\u\5\f\x\8\w\c\1\3\c\k\d\e\i\0\z\i\y\5\9\v\1\k\r\5\s\4\f\m\i\5\d\i\g\u\7\s\n\r\j\d\p\2\l\g\4\z\2\6\v\9\d\c\x\j\3\l\s\f\u\u\a\0\b\p\0\d\2\s\2\9\c\q\d\4\y\m\q\m\h\7\9\4\x\7\3\g\u\d\w\s\0\3\p\3\k\b\a\4\g\l\a\g\r\8\r\7\h\3\7\y\x\t\x\j\0\h\h\z\o\u\t\1\x\k\o\t\g\k\0\x\8\r\1\f\g\m\1\l\6\s\9\b\9\2\r\2\c\c\t\f\z\m\r\p\3\x\f\b\z\e\f\k\3\c\5\2\2\b\z\t\l\6\t\2\j\2\3\u\m\q\k\2\b\9\i\v\i\j\s\t\a\n\e\e\q\m\4\0\f\j\f\1\2\o\a\o\9\n\l\f\b\v\l\h\d\i\1\o\l\6\q\d\x\n\s\l\v\1\y\v\h\f\9\f\x\t\z\s\5\3\w\5\9\j\0\l\d\p\0\p\4\c\a\a\g\n\9\n\u\g\j\e\r\6\y\i\6\n\o\q\6\i\n\p\c\i\7\o\z\a\9\c\u\a\u\v\s\2\3\5\n\3\p\p\7\v\y\2\h\k\l\5\v\n\e\w\d\2\6\n\c\w\h\h\f\u\d\q\4\a\6\g\o\2\2\8\2\d\b\w\w\s\l\o\c\6\k\c\v\0\w\f\5\g\b\q ]] 00:09:32.565 00:09:32.565 real 0m1.492s 00:09:32.565 user 0m1.047s 00:09:32.565 sys 0m0.615s 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:09:32.565 23:09:54 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:32.565 [2024-07-24 23:09:54.986712] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:32.566 [2024-07-24 23:09:54.986806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63173 ] 00:09:32.566 { 00:09:32.566 "subsystems": [ 00:09:32.566 { 00:09:32.566 "subsystem": "bdev", 00:09:32.566 "config": [ 00:09:32.566 { 00:09:32.566 "params": { 00:09:32.566 "trtype": "pcie", 00:09:32.566 "traddr": "0000:00:10.0", 00:09:32.566 "name": "Nvme0" 00:09:32.566 }, 00:09:32.566 "method": "bdev_nvme_attach_controller" 00:09:32.566 }, 00:09:32.566 { 00:09:32.566 "method": "bdev_wait_for_examine" 00:09:32.566 } 00:09:32.566 ] 00:09:32.566 } 00:09:32.566 ] 00:09:32.566 } 00:09:32.824 [2024-07-24 23:09:55.126418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.824 [2024-07-24 23:09:55.257574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.081 [2024-07-24 23:09:55.314227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.339  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:33.339 00:09:33.339 23:09:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:33.339 ************************************ 00:09:33.339 END TEST spdk_dd_basic_rw 00:09:33.339 ************************************ 00:09:33.339 00:09:33.339 real 0m19.802s 00:09:33.339 user 0m14.502s 00:09:33.339 sys 0m6.941s 00:09:33.339 23:09:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.339 23:09:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 23:09:55 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:33.339 23:09:55 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:33.339 23:09:55 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.339 23:09:55 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.339 23:09:55 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:33.339 ************************************ 00:09:33.339 START TEST spdk_dd_posix 00:09:33.339 ************************************ 00:09:33.339 23:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:09:33.339 * Looking for test storage... 
00:09:33.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:33.339 23:09:55 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:09:33.597 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:09:33.598 * First test run, liburing in use 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:33.598 ************************************ 00:09:33.598 START TEST dd_flag_append 00:09:33.598 ************************************ 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=eqer3vwrt9tduq1lb7xmel02oew4syw7 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=uf31whjjcrglmd4r5e0suy0jhqhsajh8 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s eqer3vwrt9tduq1lb7xmel02oew4syw7 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s uf31whjjcrglmd4r5e0suy0jhqhsajh8 00:09:33.598 23:09:55 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:33.598 [2024-07-24 23:09:55.905556] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
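dd_flag_append needs no bdev at all: it copies one 32-byte file onto another with --oflag=append and then checks that the destination now ends with the source bytes, which is exactly the concatenation compared a few lines below. A minimal sketch, with random strings standing in for the suite's gen_bytes output and plain files in the current directory instead of the repo test paths:

#!/usr/bin/env bash
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

dump0=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
dump1=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf '%s' "$dump0" > dd.dump0
printf '%s' "$dump1" > dd.dump1

"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append     # plain file-to-file copy, appended

# Destination should now be its old contents followed by the source.
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo 'append OK'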
00:09:33.598 [2024-07-24 23:09:55.905652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63231 ] 00:09:33.598 [2024-07-24 23:09:56.049627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.856 [2024-07-24 23:09:56.183011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.856 [2024-07-24 23:09:56.241230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.114  Copying: 32/32 [B] (average 31 kBps) 00:09:34.114 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ uf31whjjcrglmd4r5e0suy0jhqhsajh8eqer3vwrt9tduq1lb7xmel02oew4syw7 == \u\f\3\1\w\h\j\j\c\r\g\l\m\d\4\r\5\e\0\s\u\y\0\j\h\q\h\s\a\j\h\8\e\q\e\r\3\v\w\r\t\9\t\d\u\q\1\l\b\7\x\m\e\l\0\2\o\e\w\4\s\y\w\7 ]] 00:09:34.114 00:09:34.114 real 0m0.658s 00:09:34.114 user 0m0.389s 00:09:34.114 sys 0m0.291s 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:09:34.114 ************************************ 00:09:34.114 END TEST dd_flag_append 00:09:34.114 ************************************ 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:34.114 ************************************ 00:09:34.114 START TEST dd_flag_directory 00:09:34.114 ************************************ 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.114 23:09:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:34.399 [2024-07-24 23:09:56.603549] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:34.399 [2024-07-24 23:09:56.603647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63265 ] 00:09:34.399 [2024-07-24 23:09:56.744822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.399 [2024-07-24 23:09:56.873364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.673 [2024-07-24 23:09:56.930481] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:34.673 [2024-07-24 23:09:56.966731] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:34.673 [2024-07-24 23:09:56.966793] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:34.673 [2024-07-24 23:09:56.966812] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.673 [2024-07-24 23:09:57.081037] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.932 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.933 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:34.933 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:34.933 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:34.933 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:34.933 [2024-07-24 23:09:57.273588] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:34.933 [2024-07-24 23:09:57.273791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63275 ] 00:09:34.933 [2024-07-24 23:09:57.414513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.191 [2024-07-24 23:09:57.529834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.191 [2024-07-24 23:09:57.583742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.191 [2024-07-24 23:09:57.617284] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:35.191 [2024-07-24 23:09:57.617339] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:35.191 [2024-07-24 23:09:57.617356] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.450 [2024-07-24 23:09:57.729081] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:35.450 00:09:35.450 real 0m1.283s 00:09:35.450 user 0m0.749s 00:09:35.450 sys 0m0.321s 00:09:35.450 ************************************ 00:09:35.450 END TEST dd_flag_directory 00:09:35.450 ************************************ 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:35.450 ************************************ 00:09:35.450 START TEST dd_flag_nofollow 00:09:35.450 ************************************ 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:35.450 23:09:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:35.707 
[2024-07-24 23:09:57.946214] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:35.707 [2024-07-24 23:09:57.946307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63303 ] 00:09:35.707 [2024-07-24 23:09:58.085375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.964 [2024-07-24 23:09:58.209993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.964 [2024-07-24 23:09:58.262273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.964 [2024-07-24 23:09:58.294298] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:35.964 [2024-07-24 23:09:58.294355] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:35.964 [2024-07-24 23:09:58.294372] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.964 [2024-07-24 23:09:58.402310] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:09:36.222 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.223 23:09:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:36.223 [2024-07-24 23:09:58.552453] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:36.223 [2024-07-24 23:09:58.552536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63317 ] 00:09:36.223 [2024-07-24 23:09:58.685598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.480 [2024-07-24 23:09:58.817717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.480 [2024-07-24 23:09:58.876202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.480 [2024-07-24 23:09:58.912409] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:36.480 [2024-07-24 23:09:58.912479] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:36.480 [2024-07-24 23:09:58.912500] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:36.737 [2024-07-24 23:09:59.031838] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:36.737 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:36.737 [2024-07-24 23:09:59.198467] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:36.737 [2024-07-24 23:09:59.198792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63326 ] 00:09:36.994 [2024-07-24 23:09:59.338126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.994 [2024-07-24 23:09:59.455090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.251 [2024-07-24 23:09:59.509465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:37.509  Copying: 512/512 [B] (average 500 kBps) 00:09:37.509 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 0qar7d7vb85v0hmqs7jak5kwt5och18kas1w5t0okcym75ud90utmlen67thu3kifx1h4jxgn22u4lqprfpwgs6wyaqhueqps57nala3bawg91jriqapsx1r3u1f11aumy7wolnpii5fk40pf1tn3fzkv5w5nl6x5jrx1xkr51kh7ljnidww6yu9mx9gh3edxizpcnxyrbuuw35xinnb6x73njmybkf9b86ewx9kvnfk3pt5ppce9uoz07ow7ynr4u2hpgfoqfuhvzr9pt4q9h0t76v4i6j7itade6f4edcft18j207o0dejz33pnyk28nge76xkp6z9j451vlaonqbl9kddp9z7lzohjslx1olaz1xne0tmujua4hmltz3l54n8qjhvloddkd9qhncuy6ybkbhepxfqj3dkfffcxkce8o4slaq90mxyv7g3h3t5b88dvef314kdco9bdsh83v77bs0701j8nc8c090kr6fmjjg0e0paly4u2f5d417r == \0\q\a\r\7\d\7\v\b\8\5\v\0\h\m\q\s\7\j\a\k\5\k\w\t\5\o\c\h\1\8\k\a\s\1\w\5\t\0\o\k\c\y\m\7\5\u\d\9\0\u\t\m\l\e\n\6\7\t\h\u\3\k\i\f\x\1\h\4\j\x\g\n\2\2\u\4\l\q\p\r\f\p\w\g\s\6\w\y\a\q\h\u\e\q\p\s\5\7\n\a\l\a\3\b\a\w\g\9\1\j\r\i\q\a\p\s\x\1\r\3\u\1\f\1\1\a\u\m\y\7\w\o\l\n\p\i\i\5\f\k\4\0\p\f\1\t\n\3\f\z\k\v\5\w\5\n\l\6\x\5\j\r\x\1\x\k\r\5\1\k\h\7\l\j\n\i\d\w\w\6\y\u\9\m\x\9\g\h\3\e\d\x\i\z\p\c\n\x\y\r\b\u\u\w\3\5\x\i\n\n\b\6\x\7\3\n\j\m\y\b\k\f\9\b\8\6\e\w\x\9\k\v\n\f\k\3\p\t\5\p\p\c\e\9\u\o\z\0\7\o\w\7\y\n\r\4\u\2\h\p\g\f\o\q\f\u\h\v\z\r\9\p\t\4\q\9\h\0\t\7\6\v\4\i\6\j\7\i\t\a\d\e\6\f\4\e\d\c\f\t\1\8\j\2\0\7\o\0\d\e\j\z\3\3\p\n\y\k\2\8\n\g\e\7\6\x\k\p\6\z\9\j\4\5\1\v\l\a\o\n\q\b\l\9\k\d\d\p\9\z\7\l\z\o\h\j\s\l\x\1\o\l\a\z\1\x\n\e\0\t\m\u\j\u\a\4\h\m\l\t\z\3\l\5\4\n\8\q\j\h\v\l\o\d\d\k\d\9\q\h\n\c\u\y\6\y\b\k\b\h\e\p\x\f\q\j\3\d\k\f\f\f\c\x\k\c\e\8\o\4\s\l\a\q\9\0\m\x\y\v\7\g\3\h\3\t\5\b\8\8\d\v\e\f\3\1\4\k\d\c\o\9\b\d\s\h\8\3\v\7\7\b\s\0\7\0\1\j\8\n\c\8\c\0\9\0\k\r\6\f\m\j\j\g\0\e\0\p\a\l\y\4\u\2\f\5\d\4\1\7\r ]] 00:09:37.509 00:09:37.509 real 0m1.876s 00:09:37.509 user 0m1.095s 00:09:37.509 sys 0m0.581s 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:09:37.509 ************************************ 00:09:37.509 END TEST dd_flag_nofollow 00:09:37.509 ************************************ 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:37.509 ************************************ 00:09:37.509 START TEST dd_flag_noatime 00:09:37.509 ************************************ 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:09:37.509 23:09:59 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721862599 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721862599 00:09:37.509 23:09:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:09:38.441 23:10:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:38.441 [2024-07-24 23:10:00.880953] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:38.441 [2024-07-24 23:10:00.881061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:09:38.699 [2024-07-24 23:10:01.017154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.699 [2024-07-24 23:10:01.138592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.958 [2024-07-24 23:10:01.195753] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:38.958  Copying: 512/512 [B] (average 500 kBps) 00:09:38.958 00:09:39.215 23:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:39.215 23:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721862599 )) 00:09:39.215 23:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:39.215 23:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721862599 )) 00:09:39.215 23:10:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:39.215 [2024-07-24 23:10:01.509951] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:39.215 [2024-07-24 23:10:01.510083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63384 ] 00:09:39.215 [2024-07-24 23:10:01.652067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.472 [2024-07-24 23:10:01.772112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.472 [2024-07-24 23:10:01.826721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:39.730  Copying: 512/512 [B] (average 500 kBps) 00:09:39.730 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:39.730 ************************************ 00:09:39.730 END TEST dd_flag_noatime 00:09:39.730 ************************************ 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721862601 )) 00:09:39.730 00:09:39.730 real 0m2.274s 00:09:39.730 user 0m0.737s 00:09:39.730 sys 0m0.576s 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:39.730 ************************************ 00:09:39.730 START TEST dd_flags_misc 00:09:39.730 ************************************ 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:39.730 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:39.730 [2024-07-24 23:10:02.190795] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:39.730 [2024-07-24 23:10:02.190894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:09:39.987 [2024-07-24 23:10:02.323259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.987 [2024-07-24 23:10:02.456214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.272 [2024-07-24 23:10:02.516452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:40.543  Copying: 512/512 [B] (average 500 kBps) 00:09:40.543 00:09:40.544 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t449sjut9v0yz42xjufl2tmhbmbx20c8jiho26eg4xlcy2mlgyykj26qfhrarflyr2lhfaeoqmleyjnf35lqa3lm746338qim6fvjyid8kgwrj9cw7bvz1j9h8gxtwbe4tomm7m9mib95dj9rzuep3edtfkwpea3b84aisrnkr7cx8lfhz7ix5edypy53xrifaz7utnk9wjiyxobtu1l839rt0dwmhj43hvyf0vt9gg43mp3zckdxk6gl4yvbkf75jel94spi9krgu524zecgh3f0rv1bla7hzon6rqd4fdeyxqo4d2o4updkhp5tlfugs781qmkr5kqdaaabw6k394umasewarkems4t45nr5vxif5010rnq90a4bufdg3ix1eh0hroh230qxd3hitchpzrgigmeduddxxr2ox6w4mdkuanuz8nuckmb1pekpbiraokai3eyrjevacb3xu3t4ptnk7syktvoanjfbwawp16p70eqf5tfjnc59qku4y5 == \t\4\4\9\s\j\u\t\9\v\0\y\z\4\2\x\j\u\f\l\2\t\m\h\b\m\b\x\2\0\c\8\j\i\h\o\2\6\e\g\4\x\l\c\y\2\m\l\g\y\y\k\j\2\6\q\f\h\r\a\r\f\l\y\r\2\l\h\f\a\e\o\q\m\l\e\y\j\n\f\3\5\l\q\a\3\l\m\7\4\6\3\3\8\q\i\m\6\f\v\j\y\i\d\8\k\g\w\r\j\9\c\w\7\b\v\z\1\j\9\h\8\g\x\t\w\b\e\4\t\o\m\m\7\m\9\m\i\b\9\5\d\j\9\r\z\u\e\p\3\e\d\t\f\k\w\p\e\a\3\b\8\4\a\i\s\r\n\k\r\7\c\x\8\l\f\h\z\7\i\x\5\e\d\y\p\y\5\3\x\r\i\f\a\z\7\u\t\n\k\9\w\j\i\y\x\o\b\t\u\1\l\8\3\9\r\t\0\d\w\m\h\j\4\3\h\v\y\f\0\v\t\9\g\g\4\3\m\p\3\z\c\k\d\x\k\6\g\l\4\y\v\b\k\f\7\5\j\e\l\9\4\s\p\i\9\k\r\g\u\5\2\4\z\e\c\g\h\3\f\0\r\v\1\b\l\a\7\h\z\o\n\6\r\q\d\4\f\d\e\y\x\q\o\4\d\2\o\4\u\p\d\k\h\p\5\t\l\f\u\g\s\7\8\1\q\m\k\r\5\k\q\d\a\a\a\b\w\6\k\3\9\4\u\m\a\s\e\w\a\r\k\e\m\s\4\t\4\5\n\r\5\v\x\i\f\5\0\1\0\r\n\q\9\0\a\4\b\u\f\d\g\3\i\x\1\e\h\0\h\r\o\h\2\3\0\q\x\d\3\h\i\t\c\h\p\z\r\g\i\g\m\e\d\u\d\d\x\x\r\2\o\x\6\w\4\m\d\k\u\a\n\u\z\8\n\u\c\k\m\b\1\p\e\k\p\b\i\r\a\o\k\a\i\3\e\y\r\j\e\v\a\c\b\3\x\u\3\t\4\p\t\n\k\7\s\y\k\t\v\o\a\n\j\f\b\w\a\w\p\1\6\p\7\0\e\q\f\5\t\f\j\n\c\5\9\q\k\u\4\y\5 ]] 00:09:40.544 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:40.544 23:10:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:40.544 [2024-07-24 23:10:02.835061] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:40.544 [2024-07-24 23:10:02.835177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63426 ] 00:09:40.544 [2024-07-24 23:10:02.969959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.801 [2024-07-24 23:10:03.105618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.801 [2024-07-24 23:10:03.160459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:41.058  Copying: 512/512 [B] (average 500 kBps) 00:09:41.058 00:09:41.058 23:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t449sjut9v0yz42xjufl2tmhbmbx20c8jiho26eg4xlcy2mlgyykj26qfhrarflyr2lhfaeoqmleyjnf35lqa3lm746338qim6fvjyid8kgwrj9cw7bvz1j9h8gxtwbe4tomm7m9mib95dj9rzuep3edtfkwpea3b84aisrnkr7cx8lfhz7ix5edypy53xrifaz7utnk9wjiyxobtu1l839rt0dwmhj43hvyf0vt9gg43mp3zckdxk6gl4yvbkf75jel94spi9krgu524zecgh3f0rv1bla7hzon6rqd4fdeyxqo4d2o4updkhp5tlfugs781qmkr5kqdaaabw6k394umasewarkems4t45nr5vxif5010rnq90a4bufdg3ix1eh0hroh230qxd3hitchpzrgigmeduddxxr2ox6w4mdkuanuz8nuckmb1pekpbiraokai3eyrjevacb3xu3t4ptnk7syktvoanjfbwawp16p70eqf5tfjnc59qku4y5 == \t\4\4\9\s\j\u\t\9\v\0\y\z\4\2\x\j\u\f\l\2\t\m\h\b\m\b\x\2\0\c\8\j\i\h\o\2\6\e\g\4\x\l\c\y\2\m\l\g\y\y\k\j\2\6\q\f\h\r\a\r\f\l\y\r\2\l\h\f\a\e\o\q\m\l\e\y\j\n\f\3\5\l\q\a\3\l\m\7\4\6\3\3\8\q\i\m\6\f\v\j\y\i\d\8\k\g\w\r\j\9\c\w\7\b\v\z\1\j\9\h\8\g\x\t\w\b\e\4\t\o\m\m\7\m\9\m\i\b\9\5\d\j\9\r\z\u\e\p\3\e\d\t\f\k\w\p\e\a\3\b\8\4\a\i\s\r\n\k\r\7\c\x\8\l\f\h\z\7\i\x\5\e\d\y\p\y\5\3\x\r\i\f\a\z\7\u\t\n\k\9\w\j\i\y\x\o\b\t\u\1\l\8\3\9\r\t\0\d\w\m\h\j\4\3\h\v\y\f\0\v\t\9\g\g\4\3\m\p\3\z\c\k\d\x\k\6\g\l\4\y\v\b\k\f\7\5\j\e\l\9\4\s\p\i\9\k\r\g\u\5\2\4\z\e\c\g\h\3\f\0\r\v\1\b\l\a\7\h\z\o\n\6\r\q\d\4\f\d\e\y\x\q\o\4\d\2\o\4\u\p\d\k\h\p\5\t\l\f\u\g\s\7\8\1\q\m\k\r\5\k\q\d\a\a\a\b\w\6\k\3\9\4\u\m\a\s\e\w\a\r\k\e\m\s\4\t\4\5\n\r\5\v\x\i\f\5\0\1\0\r\n\q\9\0\a\4\b\u\f\d\g\3\i\x\1\e\h\0\h\r\o\h\2\3\0\q\x\d\3\h\i\t\c\h\p\z\r\g\i\g\m\e\d\u\d\d\x\x\r\2\o\x\6\w\4\m\d\k\u\a\n\u\z\8\n\u\c\k\m\b\1\p\e\k\p\b\i\r\a\o\k\a\i\3\e\y\r\j\e\v\a\c\b\3\x\u\3\t\4\p\t\n\k\7\s\y\k\t\v\o\a\n\j\f\b\w\a\w\p\1\6\p\7\0\e\q\f\5\t\f\j\n\c\5\9\q\k\u\4\y\5 ]] 00:09:41.058 23:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:41.058 23:10:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:41.058 [2024-07-24 23:10:03.477850] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:41.058 [2024-07-24 23:10:03.477962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63437 ] 00:09:41.315 [2024-07-24 23:10:03.616004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.315 [2024-07-24 23:10:03.730933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.315 [2024-07-24 23:10:03.784599] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:41.574  Copying: 512/512 [B] (average 125 kBps) 00:09:41.574 00:09:41.574 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t449sjut9v0yz42xjufl2tmhbmbx20c8jiho26eg4xlcy2mlgyykj26qfhrarflyr2lhfaeoqmleyjnf35lqa3lm746338qim6fvjyid8kgwrj9cw7bvz1j9h8gxtwbe4tomm7m9mib95dj9rzuep3edtfkwpea3b84aisrnkr7cx8lfhz7ix5edypy53xrifaz7utnk9wjiyxobtu1l839rt0dwmhj43hvyf0vt9gg43mp3zckdxk6gl4yvbkf75jel94spi9krgu524zecgh3f0rv1bla7hzon6rqd4fdeyxqo4d2o4updkhp5tlfugs781qmkr5kqdaaabw6k394umasewarkems4t45nr5vxif5010rnq90a4bufdg3ix1eh0hroh230qxd3hitchpzrgigmeduddxxr2ox6w4mdkuanuz8nuckmb1pekpbiraokai3eyrjevacb3xu3t4ptnk7syktvoanjfbwawp16p70eqf5tfjnc59qku4y5 == \t\4\4\9\s\j\u\t\9\v\0\y\z\4\2\x\j\u\f\l\2\t\m\h\b\m\b\x\2\0\c\8\j\i\h\o\2\6\e\g\4\x\l\c\y\2\m\l\g\y\y\k\j\2\6\q\f\h\r\a\r\f\l\y\r\2\l\h\f\a\e\o\q\m\l\e\y\j\n\f\3\5\l\q\a\3\l\m\7\4\6\3\3\8\q\i\m\6\f\v\j\y\i\d\8\k\g\w\r\j\9\c\w\7\b\v\z\1\j\9\h\8\g\x\t\w\b\e\4\t\o\m\m\7\m\9\m\i\b\9\5\d\j\9\r\z\u\e\p\3\e\d\t\f\k\w\p\e\a\3\b\8\4\a\i\s\r\n\k\r\7\c\x\8\l\f\h\z\7\i\x\5\e\d\y\p\y\5\3\x\r\i\f\a\z\7\u\t\n\k\9\w\j\i\y\x\o\b\t\u\1\l\8\3\9\r\t\0\d\w\m\h\j\4\3\h\v\y\f\0\v\t\9\g\g\4\3\m\p\3\z\c\k\d\x\k\6\g\l\4\y\v\b\k\f\7\5\j\e\l\9\4\s\p\i\9\k\r\g\u\5\2\4\z\e\c\g\h\3\f\0\r\v\1\b\l\a\7\h\z\o\n\6\r\q\d\4\f\d\e\y\x\q\o\4\d\2\o\4\u\p\d\k\h\p\5\t\l\f\u\g\s\7\8\1\q\m\k\r\5\k\q\d\a\a\a\b\w\6\k\3\9\4\u\m\a\s\e\w\a\r\k\e\m\s\4\t\4\5\n\r\5\v\x\i\f\5\0\1\0\r\n\q\9\0\a\4\b\u\f\d\g\3\i\x\1\e\h\0\h\r\o\h\2\3\0\q\x\d\3\h\i\t\c\h\p\z\r\g\i\g\m\e\d\u\d\d\x\x\r\2\o\x\6\w\4\m\d\k\u\a\n\u\z\8\n\u\c\k\m\b\1\p\e\k\p\b\i\r\a\o\k\a\i\3\e\y\r\j\e\v\a\c\b\3\x\u\3\t\4\p\t\n\k\7\s\y\k\t\v\o\a\n\j\f\b\w\a\w\p\1\6\p\7\0\e\q\f\5\t\f\j\n\c\5\9\q\k\u\4\y\5 ]] 00:09:41.574 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:41.574 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:41.832 [2024-07-24 23:10:04.094194] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:41.832 [2024-07-24 23:10:04.094274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63446 ] 00:09:41.832 [2024-07-24 23:10:04.228807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.090 [2024-07-24 23:10:04.344583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.090 [2024-07-24 23:10:04.397926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.349  Copying: 512/512 [B] (average 250 kBps) 00:09:42.349 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t449sjut9v0yz42xjufl2tmhbmbx20c8jiho26eg4xlcy2mlgyykj26qfhrarflyr2lhfaeoqmleyjnf35lqa3lm746338qim6fvjyid8kgwrj9cw7bvz1j9h8gxtwbe4tomm7m9mib95dj9rzuep3edtfkwpea3b84aisrnkr7cx8lfhz7ix5edypy53xrifaz7utnk9wjiyxobtu1l839rt0dwmhj43hvyf0vt9gg43mp3zckdxk6gl4yvbkf75jel94spi9krgu524zecgh3f0rv1bla7hzon6rqd4fdeyxqo4d2o4updkhp5tlfugs781qmkr5kqdaaabw6k394umasewarkems4t45nr5vxif5010rnq90a4bufdg3ix1eh0hroh230qxd3hitchpzrgigmeduddxxr2ox6w4mdkuanuz8nuckmb1pekpbiraokai3eyrjevacb3xu3t4ptnk7syktvoanjfbwawp16p70eqf5tfjnc59qku4y5 == \t\4\4\9\s\j\u\t\9\v\0\y\z\4\2\x\j\u\f\l\2\t\m\h\b\m\b\x\2\0\c\8\j\i\h\o\2\6\e\g\4\x\l\c\y\2\m\l\g\y\y\k\j\2\6\q\f\h\r\a\r\f\l\y\r\2\l\h\f\a\e\o\q\m\l\e\y\j\n\f\3\5\l\q\a\3\l\m\7\4\6\3\3\8\q\i\m\6\f\v\j\y\i\d\8\k\g\w\r\j\9\c\w\7\b\v\z\1\j\9\h\8\g\x\t\w\b\e\4\t\o\m\m\7\m\9\m\i\b\9\5\d\j\9\r\z\u\e\p\3\e\d\t\f\k\w\p\e\a\3\b\8\4\a\i\s\r\n\k\r\7\c\x\8\l\f\h\z\7\i\x\5\e\d\y\p\y\5\3\x\r\i\f\a\z\7\u\t\n\k\9\w\j\i\y\x\o\b\t\u\1\l\8\3\9\r\t\0\d\w\m\h\j\4\3\h\v\y\f\0\v\t\9\g\g\4\3\m\p\3\z\c\k\d\x\k\6\g\l\4\y\v\b\k\f\7\5\j\e\l\9\4\s\p\i\9\k\r\g\u\5\2\4\z\e\c\g\h\3\f\0\r\v\1\b\l\a\7\h\z\o\n\6\r\q\d\4\f\d\e\y\x\q\o\4\d\2\o\4\u\p\d\k\h\p\5\t\l\f\u\g\s\7\8\1\q\m\k\r\5\k\q\d\a\a\a\b\w\6\k\3\9\4\u\m\a\s\e\w\a\r\k\e\m\s\4\t\4\5\n\r\5\v\x\i\f\5\0\1\0\r\n\q\9\0\a\4\b\u\f\d\g\3\i\x\1\e\h\0\h\r\o\h\2\3\0\q\x\d\3\h\i\t\c\h\p\z\r\g\i\g\m\e\d\u\d\d\x\x\r\2\o\x\6\w\4\m\d\k\u\a\n\u\z\8\n\u\c\k\m\b\1\p\e\k\p\b\i\r\a\o\k\a\i\3\e\y\r\j\e\v\a\c\b\3\x\u\3\t\4\p\t\n\k\7\s\y\k\t\v\o\a\n\j\f\b\w\a\w\p\1\6\p\7\0\e\q\f\5\t\f\j\n\c\5\9\q\k\u\4\y\5 ]] 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:42.349 23:10:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:42.349 [2024-07-24 23:10:04.704879] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:42.349 [2024-07-24 23:10:04.704979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63456 ] 00:09:42.607 [2024-07-24 23:10:04.843078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.607 [2024-07-24 23:10:04.984505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.607 [2024-07-24 23:10:05.037622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:42.866  Copying: 512/512 [B] (average 500 kBps) 00:09:42.866 00:09:42.866 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jfs92p73gn4t1fqjfdc0i13bwn18fcpll6a85w6nlcza7gmqxrmxgpxzzjft584qw41436tgjxyeve76dagxc3l78p3ps7xoxopqc9ha4tewxkfew54a1ttg5beh966irpqst6vn90uiro3bjn2ej4acmelkxu8xd8agzh29ffm9ftvthor4vrp2pr9xeyx4vl4nadqj5t954ry7w8dl3ihpkyuqmfnp5paf3u6liuwz4g2qgvstkzb2cf6yeciij9a1gzon4mtpvnz5wa888s342z518fw7m5ytn5q8plx6mbslpa0iouoq1fqya6mchrhot5sy7ggzl74lbyjv8ncr4v13od9ud22x4zs410ey8tkdddbn8f1xr49odtb62kbhkwkyz7opabiggf8mndp8pehx417t09zofiqs2vv1a64xf4vu4ien50qsf7mfdu6tqj6cyyznwexvzd9rr3s2w7as3wprhtp9wfbtwed042l9gd1m10imq5lzpzop == \j\f\s\9\2\p\7\3\g\n\4\t\1\f\q\j\f\d\c\0\i\1\3\b\w\n\1\8\f\c\p\l\l\6\a\8\5\w\6\n\l\c\z\a\7\g\m\q\x\r\m\x\g\p\x\z\z\j\f\t\5\8\4\q\w\4\1\4\3\6\t\g\j\x\y\e\v\e\7\6\d\a\g\x\c\3\l\7\8\p\3\p\s\7\x\o\x\o\p\q\c\9\h\a\4\t\e\w\x\k\f\e\w\5\4\a\1\t\t\g\5\b\e\h\9\6\6\i\r\p\q\s\t\6\v\n\9\0\u\i\r\o\3\b\j\n\2\e\j\4\a\c\m\e\l\k\x\u\8\x\d\8\a\g\z\h\2\9\f\f\m\9\f\t\v\t\h\o\r\4\v\r\p\2\p\r\9\x\e\y\x\4\v\l\4\n\a\d\q\j\5\t\9\5\4\r\y\7\w\8\d\l\3\i\h\p\k\y\u\q\m\f\n\p\5\p\a\f\3\u\6\l\i\u\w\z\4\g\2\q\g\v\s\t\k\z\b\2\c\f\6\y\e\c\i\i\j\9\a\1\g\z\o\n\4\m\t\p\v\n\z\5\w\a\8\8\8\s\3\4\2\z\5\1\8\f\w\7\m\5\y\t\n\5\q\8\p\l\x\6\m\b\s\l\p\a\0\i\o\u\o\q\1\f\q\y\a\6\m\c\h\r\h\o\t\5\s\y\7\g\g\z\l\7\4\l\b\y\j\v\8\n\c\r\4\v\1\3\o\d\9\u\d\2\2\x\4\z\s\4\1\0\e\y\8\t\k\d\d\d\b\n\8\f\1\x\r\4\9\o\d\t\b\6\2\k\b\h\k\w\k\y\z\7\o\p\a\b\i\g\g\f\8\m\n\d\p\8\p\e\h\x\4\1\7\t\0\9\z\o\f\i\q\s\2\v\v\1\a\6\4\x\f\4\v\u\4\i\e\n\5\0\q\s\f\7\m\f\d\u\6\t\q\j\6\c\y\y\z\n\w\e\x\v\z\d\9\r\r\3\s\2\w\7\a\s\3\w\p\r\h\t\p\9\w\f\b\t\w\e\d\0\4\2\l\9\g\d\1\m\1\0\i\m\q\5\l\z\p\z\o\p ]] 00:09:42.866 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:42.866 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:42.866 [2024-07-24 23:10:05.336083] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:42.866 [2024-07-24 23:10:05.336194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63471 ] 00:09:43.124 [2024-07-24 23:10:05.476964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.124 [2024-07-24 23:10:05.607518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.382 [2024-07-24 23:10:05.664295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.640  Copying: 512/512 [B] (average 500 kBps) 00:09:43.640 00:09:43.640 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jfs92p73gn4t1fqjfdc0i13bwn18fcpll6a85w6nlcza7gmqxrmxgpxzzjft584qw41436tgjxyeve76dagxc3l78p3ps7xoxopqc9ha4tewxkfew54a1ttg5beh966irpqst6vn90uiro3bjn2ej4acmelkxu8xd8agzh29ffm9ftvthor4vrp2pr9xeyx4vl4nadqj5t954ry7w8dl3ihpkyuqmfnp5paf3u6liuwz4g2qgvstkzb2cf6yeciij9a1gzon4mtpvnz5wa888s342z518fw7m5ytn5q8plx6mbslpa0iouoq1fqya6mchrhot5sy7ggzl74lbyjv8ncr4v13od9ud22x4zs410ey8tkdddbn8f1xr49odtb62kbhkwkyz7opabiggf8mndp8pehx417t09zofiqs2vv1a64xf4vu4ien50qsf7mfdu6tqj6cyyznwexvzd9rr3s2w7as3wprhtp9wfbtwed042l9gd1m10imq5lzpzop == \j\f\s\9\2\p\7\3\g\n\4\t\1\f\q\j\f\d\c\0\i\1\3\b\w\n\1\8\f\c\p\l\l\6\a\8\5\w\6\n\l\c\z\a\7\g\m\q\x\r\m\x\g\p\x\z\z\j\f\t\5\8\4\q\w\4\1\4\3\6\t\g\j\x\y\e\v\e\7\6\d\a\g\x\c\3\l\7\8\p\3\p\s\7\x\o\x\o\p\q\c\9\h\a\4\t\e\w\x\k\f\e\w\5\4\a\1\t\t\g\5\b\e\h\9\6\6\i\r\p\q\s\t\6\v\n\9\0\u\i\r\o\3\b\j\n\2\e\j\4\a\c\m\e\l\k\x\u\8\x\d\8\a\g\z\h\2\9\f\f\m\9\f\t\v\t\h\o\r\4\v\r\p\2\p\r\9\x\e\y\x\4\v\l\4\n\a\d\q\j\5\t\9\5\4\r\y\7\w\8\d\l\3\i\h\p\k\y\u\q\m\f\n\p\5\p\a\f\3\u\6\l\i\u\w\z\4\g\2\q\g\v\s\t\k\z\b\2\c\f\6\y\e\c\i\i\j\9\a\1\g\z\o\n\4\m\t\p\v\n\z\5\w\a\8\8\8\s\3\4\2\z\5\1\8\f\w\7\m\5\y\t\n\5\q\8\p\l\x\6\m\b\s\l\p\a\0\i\o\u\o\q\1\f\q\y\a\6\m\c\h\r\h\o\t\5\s\y\7\g\g\z\l\7\4\l\b\y\j\v\8\n\c\r\4\v\1\3\o\d\9\u\d\2\2\x\4\z\s\4\1\0\e\y\8\t\k\d\d\d\b\n\8\f\1\x\r\4\9\o\d\t\b\6\2\k\b\h\k\w\k\y\z\7\o\p\a\b\i\g\g\f\8\m\n\d\p\8\p\e\h\x\4\1\7\t\0\9\z\o\f\i\q\s\2\v\v\1\a\6\4\x\f\4\v\u\4\i\e\n\5\0\q\s\f\7\m\f\d\u\6\t\q\j\6\c\y\y\z\n\w\e\x\v\z\d\9\r\r\3\s\2\w\7\a\s\3\w\p\r\h\t\p\9\w\f\b\t\w\e\d\0\4\2\l\9\g\d\1\m\1\0\i\m\q\5\l\z\p\z\o\p ]] 00:09:43.640 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:43.640 23:10:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:43.640 [2024-07-24 23:10:05.977266] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:43.640 [2024-07-24 23:10:05.977366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63476 ] 00:09:43.640 [2024-07-24 23:10:06.119303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.898 [2024-07-24 23:10:06.252552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.898 [2024-07-24 23:10:06.311698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.156  Copying: 512/512 [B] (average 19 kBps) 00:09:44.156 00:09:44.156 23:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jfs92p73gn4t1fqjfdc0i13bwn18fcpll6a85w6nlcza7gmqxrmxgpxzzjft584qw41436tgjxyeve76dagxc3l78p3ps7xoxopqc9ha4tewxkfew54a1ttg5beh966irpqst6vn90uiro3bjn2ej4acmelkxu8xd8agzh29ffm9ftvthor4vrp2pr9xeyx4vl4nadqj5t954ry7w8dl3ihpkyuqmfnp5paf3u6liuwz4g2qgvstkzb2cf6yeciij9a1gzon4mtpvnz5wa888s342z518fw7m5ytn5q8plx6mbslpa0iouoq1fqya6mchrhot5sy7ggzl74lbyjv8ncr4v13od9ud22x4zs410ey8tkdddbn8f1xr49odtb62kbhkwkyz7opabiggf8mndp8pehx417t09zofiqs2vv1a64xf4vu4ien50qsf7mfdu6tqj6cyyznwexvzd9rr3s2w7as3wprhtp9wfbtwed042l9gd1m10imq5lzpzop == \j\f\s\9\2\p\7\3\g\n\4\t\1\f\q\j\f\d\c\0\i\1\3\b\w\n\1\8\f\c\p\l\l\6\a\8\5\w\6\n\l\c\z\a\7\g\m\q\x\r\m\x\g\p\x\z\z\j\f\t\5\8\4\q\w\4\1\4\3\6\t\g\j\x\y\e\v\e\7\6\d\a\g\x\c\3\l\7\8\p\3\p\s\7\x\o\x\o\p\q\c\9\h\a\4\t\e\w\x\k\f\e\w\5\4\a\1\t\t\g\5\b\e\h\9\6\6\i\r\p\q\s\t\6\v\n\9\0\u\i\r\o\3\b\j\n\2\e\j\4\a\c\m\e\l\k\x\u\8\x\d\8\a\g\z\h\2\9\f\f\m\9\f\t\v\t\h\o\r\4\v\r\p\2\p\r\9\x\e\y\x\4\v\l\4\n\a\d\q\j\5\t\9\5\4\r\y\7\w\8\d\l\3\i\h\p\k\y\u\q\m\f\n\p\5\p\a\f\3\u\6\l\i\u\w\z\4\g\2\q\g\v\s\t\k\z\b\2\c\f\6\y\e\c\i\i\j\9\a\1\g\z\o\n\4\m\t\p\v\n\z\5\w\a\8\8\8\s\3\4\2\z\5\1\8\f\w\7\m\5\y\t\n\5\q\8\p\l\x\6\m\b\s\l\p\a\0\i\o\u\o\q\1\f\q\y\a\6\m\c\h\r\h\o\t\5\s\y\7\g\g\z\l\7\4\l\b\y\j\v\8\n\c\r\4\v\1\3\o\d\9\u\d\2\2\x\4\z\s\4\1\0\e\y\8\t\k\d\d\d\b\n\8\f\1\x\r\4\9\o\d\t\b\6\2\k\b\h\k\w\k\y\z\7\o\p\a\b\i\g\g\f\8\m\n\d\p\8\p\e\h\x\4\1\7\t\0\9\z\o\f\i\q\s\2\v\v\1\a\6\4\x\f\4\v\u\4\i\e\n\5\0\q\s\f\7\m\f\d\u\6\t\q\j\6\c\y\y\z\n\w\e\x\v\z\d\9\r\r\3\s\2\w\7\a\s\3\w\p\r\h\t\p\9\w\f\b\t\w\e\d\0\4\2\l\9\g\d\1\m\1\0\i\m\q\5\l\z\p\z\o\p ]] 00:09:44.156 23:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:44.156 23:10:06 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:44.413 [2024-07-24 23:10:06.667167] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:44.413 [2024-07-24 23:10:06.667269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63490 ] 00:09:44.413 [2024-07-24 23:10:06.806803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.670 [2024-07-24 23:10:06.928376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.670 [2024-07-24 23:10:06.983479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:44.928  Copying: 512/512 [B] (average 250 kBps) 00:09:44.928 00:09:44.928 ************************************ 00:09:44.928 END TEST dd_flags_misc 00:09:44.928 ************************************ 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jfs92p73gn4t1fqjfdc0i13bwn18fcpll6a85w6nlcza7gmqxrmxgpxzzjft584qw41436tgjxyeve76dagxc3l78p3ps7xoxopqc9ha4tewxkfew54a1ttg5beh966irpqst6vn90uiro3bjn2ej4acmelkxu8xd8agzh29ffm9ftvthor4vrp2pr9xeyx4vl4nadqj5t954ry7w8dl3ihpkyuqmfnp5paf3u6liuwz4g2qgvstkzb2cf6yeciij9a1gzon4mtpvnz5wa888s342z518fw7m5ytn5q8plx6mbslpa0iouoq1fqya6mchrhot5sy7ggzl74lbyjv8ncr4v13od9ud22x4zs410ey8tkdddbn8f1xr49odtb62kbhkwkyz7opabiggf8mndp8pehx417t09zofiqs2vv1a64xf4vu4ien50qsf7mfdu6tqj6cyyznwexvzd9rr3s2w7as3wprhtp9wfbtwed042l9gd1m10imq5lzpzop == \j\f\s\9\2\p\7\3\g\n\4\t\1\f\q\j\f\d\c\0\i\1\3\b\w\n\1\8\f\c\p\l\l\6\a\8\5\w\6\n\l\c\z\a\7\g\m\q\x\r\m\x\g\p\x\z\z\j\f\t\5\8\4\q\w\4\1\4\3\6\t\g\j\x\y\e\v\e\7\6\d\a\g\x\c\3\l\7\8\p\3\p\s\7\x\o\x\o\p\q\c\9\h\a\4\t\e\w\x\k\f\e\w\5\4\a\1\t\t\g\5\b\e\h\9\6\6\i\r\p\q\s\t\6\v\n\9\0\u\i\r\o\3\b\j\n\2\e\j\4\a\c\m\e\l\k\x\u\8\x\d\8\a\g\z\h\2\9\f\f\m\9\f\t\v\t\h\o\r\4\v\r\p\2\p\r\9\x\e\y\x\4\v\l\4\n\a\d\q\j\5\t\9\5\4\r\y\7\w\8\d\l\3\i\h\p\k\y\u\q\m\f\n\p\5\p\a\f\3\u\6\l\i\u\w\z\4\g\2\q\g\v\s\t\k\z\b\2\c\f\6\y\e\c\i\i\j\9\a\1\g\z\o\n\4\m\t\p\v\n\z\5\w\a\8\8\8\s\3\4\2\z\5\1\8\f\w\7\m\5\y\t\n\5\q\8\p\l\x\6\m\b\s\l\p\a\0\i\o\u\o\q\1\f\q\y\a\6\m\c\h\r\h\o\t\5\s\y\7\g\g\z\l\7\4\l\b\y\j\v\8\n\c\r\4\v\1\3\o\d\9\u\d\2\2\x\4\z\s\4\1\0\e\y\8\t\k\d\d\d\b\n\8\f\1\x\r\4\9\o\d\t\b\6\2\k\b\h\k\w\k\y\z\7\o\p\a\b\i\g\g\f\8\m\n\d\p\8\p\e\h\x\4\1\7\t\0\9\z\o\f\i\q\s\2\v\v\1\a\6\4\x\f\4\v\u\4\i\e\n\5\0\q\s\f\7\m\f\d\u\6\t\q\j\6\c\y\y\z\n\w\e\x\v\z\d\9\r\r\3\s\2\w\7\a\s\3\w\p\r\h\t\p\9\w\f\b\t\w\e\d\0\4\2\l\9\g\d\1\m\1\0\i\m\q\5\l\z\p\z\o\p ]] 00:09:44.928 00:09:44.928 real 0m5.102s 00:09:44.928 user 0m3.051s 00:09:44.928 sys 0m2.232s 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:09:44.928 * Second test run, disabling liburing, forcing AIO 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:44.928 ************************************ 00:09:44.928 START TEST dd_flag_append_forced_aio 00:09:44.928 ************************************ 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=195n81nob5h9jk4lihjx4jm4bsrkjql4 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:44.928 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:44.929 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=ig2ax206arh9mtkljbl7vrdxuzz0ee64 00:09:44.929 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 195n81nob5h9jk4lihjx4jm4bsrkjql4 00:09:44.929 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s ig2ax206arh9mtkljbl7vrdxuzz0ee64 00:09:44.929 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:09:44.929 [2024-07-24 23:10:07.351753] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:44.929 [2024-07-24 23:10:07.351848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63524 ] 00:09:45.187 [2024-07-24 23:10:07.491057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.187 [2024-07-24 23:10:07.605232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.187 [2024-07-24 23:10:07.660157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.444  Copying: 32/32 [B] (average 31 kBps) 00:09:45.444 00:09:45.444 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ ig2ax206arh9mtkljbl7vrdxuzz0ee64195n81nob5h9jk4lihjx4jm4bsrkjql4 == \i\g\2\a\x\2\0\6\a\r\h\9\m\t\k\l\j\b\l\7\v\r\d\x\u\z\z\0\e\e\6\4\1\9\5\n\8\1\n\o\b\5\h\9\j\k\4\l\i\h\j\x\4\j\m\4\b\s\r\k\j\q\l\4 ]] 00:09:45.444 00:09:45.444 real 0m0.631s 00:09:45.444 user 0m0.355s 00:09:45.444 sys 0m0.154s 00:09:45.444 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.444 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:45.444 ************************************ 00:09:45.444 END TEST dd_flag_append_forced_aio 00:09:45.444 ************************************ 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:45.702 ************************************ 00:09:45.702 START TEST dd_flag_directory_forced_aio 00:09:45.702 ************************************ 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:45.702 23:10:07 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:45.702 [2024-07-24 23:10:08.027392] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:45.703 [2024-07-24 23:10:08.027498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:09:45.703 [2024-07-24 23:10:08.167652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.959 [2024-07-24 23:10:08.303402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.959 [2024-07-24 23:10:08.359636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.959 [2024-07-24 23:10:08.396065] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:45.959 [2024-07-24 23:10:08.396153] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:45.959 [2024-07-24 23:10:08.396175] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:46.217 [2024-07-24 23:10:08.511495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:46.217 23:10:08 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:09:46.217 [2024-07-24 23:10:08.662344] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:46.217 [2024-07-24 23:10:08.662575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63560 ] 00:09:46.474 [2024-07-24 23:10:08.796256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.474 [2024-07-24 23:10:08.922589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.733 [2024-07-24 23:10:08.980151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.733 [2024-07-24 23:10:09.019647] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:46.733 [2024-07-24 23:10:09.019722] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:09:46.733 [2024-07-24 23:10:09.019741] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:46.733 [2024-07-24 23:10:09.140064] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:46.992 
23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:46.992 00:09:46.992 real 0m1.273s 00:09:46.992 user 0m0.766s 00:09:46.992 sys 0m0.292s 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:46.992 ************************************ 00:09:46.992 END TEST dd_flag_directory_forced_aio 00:09:46.992 ************************************ 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:46.992 ************************************ 00:09:46.992 START TEST dd_flag_nofollow_forced_aio 00:09:46.992 ************************************ 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:46.992 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:46.992 [2024-07-24 23:10:09.359452] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:46.992 [2024-07-24 23:10:09.359557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63594 ] 00:09:47.250 [2024-07-24 23:10:09.498531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.250 [2024-07-24 23:10:09.624704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.250 [2024-07-24 23:10:09.682770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:47.250 [2024-07-24 23:10:09.717185] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:47.250 [2024-07-24 23:10:09.717250] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:09:47.250 [2024-07-24 23:10:09.717268] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.508 [2024-07-24 23:10:09.831183] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:47.508 23:10:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:09:47.508 [2024-07-24 23:10:09.980380] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:47.508 [2024-07-24 23:10:09.980478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63598 ] 00:09:47.767 [2024-07-24 23:10:10.117031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.767 [2024-07-24 23:10:10.236523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.025 [2024-07-24 23:10:10.290136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.025 [2024-07-24 23:10:10.322096] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:48.025 [2024-07-24 23:10:10.322162] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:09:48.025 [2024-07-24 23:10:10.322178] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:48.025 [2024-07-24 23:10:10.432404] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:48.284 23:10:10 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:48.284 [2024-07-24 23:10:10.588209] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:48.284 [2024-07-24 23:10:10.588309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63611 ] 00:09:48.284 [2024-07-24 23:10:10.721640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.568 [2024-07-24 23:10:10.834392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.568 [2024-07-24 23:10:10.887676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:48.837  Copying: 512/512 [B] (average 500 kBps) 00:09:48.837 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ qubshtquqeshch6rhomcb3ugfmxtxb1xmqlrhon4c7xqy3q67v57m0w7zhxvhcl8ppqe3pwpiq22kbc9i4atswrn5yihiajqnyf57xayle0pnksphnxfoxk8yr59vqxtynatrdr317u7tsjqj2cgv3fz4cuttgmakj780c5fliour3mhoz7xgnnjiko6tacaqptlkq6y2emhasnmyncatedqhltcip0y76jtkrs6ntzhhrjjgo0p5exg8hb5vj44tl9b8pwmv6nspp3oufd82dccfpulzzq3zxtejom6fh6brth6uo4witokg5xcqmx4mjb43hfjlne0kloyslvvlkvj2i0jrrfnqkp0l4zga5hmxtb8o13qn00f7aepe42jxbjpdsjvn3p5gj8ru7w51h01n7ocm1ztce8p25qyr58p068lc28x9noy4lopfo7pzji573aec9nvky0465dobosi5x506mui8dpr49jj1mj2flounsoufzoncg39owoc == \q\u\b\s\h\t\q\u\q\e\s\h\c\h\6\r\h\o\m\c\b\3\u\g\f\m\x\t\x\b\1\x\m\q\l\r\h\o\n\4\c\7\x\q\y\3\q\6\7\v\5\7\m\0\w\7\z\h\x\v\h\c\l\8\p\p\q\e\3\p\w\p\i\q\2\2\k\b\c\9\i\4\a\t\s\w\r\n\5\y\i\h\i\a\j\q\n\y\f\5\7\x\a\y\l\e\0\p\n\k\s\p\h\n\x\f\o\x\k\8\y\r\5\9\v\q\x\t\y\n\a\t\r\d\r\3\1\7\u\7\t\s\j\q\j\2\c\g\v\3\f\z\4\c\u\t\t\g\m\a\k\j\7\8\0\c\5\f\l\i\o\u\r\3\m\h\o\z\7\x\g\n\n\j\i\k\o\6\t\a\c\a\q\p\t\l\k\q\6\y\2\e\m\h\a\s\n\m\y\n\c\a\t\e\d\q\h\l\t\c\i\p\0\y\7\6\j\t\k\r\s\6\n\t\z\h\h\r\j\j\g\o\0\p\5\e\x\g\8\h\b\5\v\j\4\4\t\l\9\b\8\p\w\m\v\6\n\s\p\p\3\o\u\f\d\8\2\d\c\c\f\p\u\l\z\z\q\3\z\x\t\e\j\o\m\6\f\h\6\b\r\t\h\6\u\o\4\w\i\t\o\k\g\5\x\c\q\m\x\4\m\j\b\4\3\h\f\j\l\n\e\0\k\l\o\y\s\l\v\v\l\k\v\j\2\i\0\j\r\r\f\n\q\k\p\0\l\4\z\g\a\5\h\m\x\t\b\8\o\1\3\q\n\0\0\f\7\a\e\p\e\4\2\j\x\b\j\p\d\s\j\v\n\3\p\5\g\j\8\r\u\7\w\5\1\h\0\1\n\7\o\c\m\1\z\t\c\e\8\p\2\5\q\y\r\5\8\p\0\6\8\l\c\2\8\x\9\n\o\y\4\l\o\p\f\o\7\p\z\j\i\5\7\3\a\e\c\9\n\v\k\y\0\4\6\5\d\o\b\o\s\i\5\x\5\0\6\m\u\i\8\d\p\r\4\9\j\j\1\m\j\2\f\l\o\u\n\s\o\u\f\z\o\n\c\g\3\9\o\w\o\c ]] 00:09:48.837 00:09:48.837 real 0m1.874s 00:09:48.837 user 0m1.095s 00:09:48.837 sys 0m0.438s 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.837 ************************************ 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 END TEST dd_flag_nofollow_forced_aio 
00:09:48.837 ************************************ 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 ************************************ 00:09:48.837 START TEST dd_flag_noatime_forced_aio 00:09:48.837 ************************************ 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721862610 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721862611 00:09:48.837 23:10:11 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:09:49.772 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:50.030 [2024-07-24 23:10:12.294890] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
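The noatime pass starting above first records each dump file's access time with stat --printf=%X and compares it again once the copies finish (the (( atime_if == ... )) checks further on in this run); the flag being exercised corresponds to open(2)'s O_NOATIME. Below is a minimal standalone sketch of that check, assuming plain POSIX/Linux calls and a placeholder file name, not code taken from the SPDK test scripts or from spdk_dd itself:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat before, after;
        char buf[4096];
        int fd;

        if (stat("dd.dump0", &before) != 0)   /* placeholder path */
            return 1;

        /* O_NOATIME asks the kernel not to update st_atime on reads; it is
         * honored only if the caller owns the file or has CAP_FOWNER. */
        fd = open("dd.dump0", O_RDONLY | O_NOATIME);
        if (fd < 0)
            return 1;
        if (read(fd, buf, sizeof(buf)) < 0)
            perror("read");
        close(fd);

        if (stat("dd.dump0", &after) != 0)
            return 1;

        /* Same comparison the test performs with "stat --printf=%X". */
        printf("atime unchanged: %s\n",
               before.st_atime == after.st_atime ? "yes" : "no");
        return 0;
    }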
00:09:50.030 [2024-07-24 23:10:12.295004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63652 ] 00:09:50.030 [2024-07-24 23:10:12.432890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.288 [2024-07-24 23:10:12.566710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.288 [2024-07-24 23:10:12.624380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:50.546  Copying: 512/512 [B] (average 500 kBps) 00:09:50.546 00:09:50.546 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:50.546 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721862610 )) 00:09:50.546 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:50.546 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721862611 )) 00:09:50.546 23:10:12 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:50.546 [2024-07-24 23:10:12.949832] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:50.546 [2024-07-24 23:10:12.949941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:09:50.805 [2024-07-24 23:10:13.081544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.805 [2024-07-24 23:10:13.197286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.805 [2024-07-24 23:10:13.250746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.075  Copying: 512/512 [B] (average 500 kBps) 00:09:51.075 00:09:51.075 23:10:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:51.075 23:10:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721862613 )) 00:09:51.075 00:09:51.075 real 0m2.331s 00:09:51.075 user 0m0.772s 00:09:51.075 sys 0m0.306s 00:09:51.075 23:10:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.075 ************************************ 00:09:51.075 END TEST dd_flag_noatime_forced_aio 00:09:51.075 23:10:13 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:51.075 ************************************ 00:09:51.340 23:10:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.341 23:10:13 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:51.341 ************************************ 00:09:51.341 START TEST dd_flags_misc_forced_aio 00:09:51.341 ************************************ 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:51.341 23:10:13 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:51.341 [2024-07-24 23:10:13.662389] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:51.341 [2024-07-24 23:10:13.662477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63695 ] 00:09:51.341 [2024-07-24 23:10:13.800017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.599 [2024-07-24 23:10:13.928635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.599 [2024-07-24 23:10:13.986246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:51.857  Copying: 512/512 [B] (average 500 kBps) 00:09:51.857 00:09:51.857 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zm4al1x9mmfzdx8ggopar7qleix8jtktlguw30y5j148rjqvmdd080m28c8jkeafrnaivtc7f6rdp0hiygvj16z7bpeus6ln65r6jxml5fz1c8b2mh0opr77vy8lg69b2hdy3c28f97tykdoeliz7ec7hx9a90388gjy7abde2ffxyohg5cvc12rhv998gdwx7thudbj5ilfjlujtpqquyrezdhs0wdk2dsw2e39a19etv04skgkkw7yqfbifitch2e53gnjq56llxptia113c8jfvfvzz24dmqmsehxwq0ezb3fejosr4aqf11q5kx3kpf7l3ni1m2h7vzstho56xugcqa4yxqh95f0ny8pve7ajudfl1syfms7z428c9e53d11w28p5c23vroaxgkhfu9td1hofohy1hkodffxeay82x4fd00ymoc56suv2qiujykk51wngmqtw9fq8vq9kqtqnz3h0x80el0zv02go4e5003zy45eo3qm7q5fvwee == 
\z\m\4\a\l\1\x\9\m\m\f\z\d\x\8\g\g\o\p\a\r\7\q\l\e\i\x\8\j\t\k\t\l\g\u\w\3\0\y\5\j\1\4\8\r\j\q\v\m\d\d\0\8\0\m\2\8\c\8\j\k\e\a\f\r\n\a\i\v\t\c\7\f\6\r\d\p\0\h\i\y\g\v\j\1\6\z\7\b\p\e\u\s\6\l\n\6\5\r\6\j\x\m\l\5\f\z\1\c\8\b\2\m\h\0\o\p\r\7\7\v\y\8\l\g\6\9\b\2\h\d\y\3\c\2\8\f\9\7\t\y\k\d\o\e\l\i\z\7\e\c\7\h\x\9\a\9\0\3\8\8\g\j\y\7\a\b\d\e\2\f\f\x\y\o\h\g\5\c\v\c\1\2\r\h\v\9\9\8\g\d\w\x\7\t\h\u\d\b\j\5\i\l\f\j\l\u\j\t\p\q\q\u\y\r\e\z\d\h\s\0\w\d\k\2\d\s\w\2\e\3\9\a\1\9\e\t\v\0\4\s\k\g\k\k\w\7\y\q\f\b\i\f\i\t\c\h\2\e\5\3\g\n\j\q\5\6\l\l\x\p\t\i\a\1\1\3\c\8\j\f\v\f\v\z\z\2\4\d\m\q\m\s\e\h\x\w\q\0\e\z\b\3\f\e\j\o\s\r\4\a\q\f\1\1\q\5\k\x\3\k\p\f\7\l\3\n\i\1\m\2\h\7\v\z\s\t\h\o\5\6\x\u\g\c\q\a\4\y\x\q\h\9\5\f\0\n\y\8\p\v\e\7\a\j\u\d\f\l\1\s\y\f\m\s\7\z\4\2\8\c\9\e\5\3\d\1\1\w\2\8\p\5\c\2\3\v\r\o\a\x\g\k\h\f\u\9\t\d\1\h\o\f\o\h\y\1\h\k\o\d\f\f\x\e\a\y\8\2\x\4\f\d\0\0\y\m\o\c\5\6\s\u\v\2\q\i\u\j\y\k\k\5\1\w\n\g\m\q\t\w\9\f\q\8\v\q\9\k\q\t\q\n\z\3\h\0\x\8\0\e\l\0\z\v\0\2\g\o\4\e\5\0\0\3\z\y\4\5\e\o\3\q\m\7\q\5\f\v\w\e\e ]] 00:09:51.857 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:51.857 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:51.857 [2024-07-24 23:10:14.308489] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:51.857 [2024-07-24 23:10:14.308595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63697 ] 00:09:52.115 [2024-07-24 23:10:14.447471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.116 [2024-07-24 23:10:14.586693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.373 [2024-07-24 23:10:14.644428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:52.630  Copying: 512/512 [B] (average 500 kBps) 00:09:52.631 00:09:52.631 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zm4al1x9mmfzdx8ggopar7qleix8jtktlguw30y5j148rjqvmdd080m28c8jkeafrnaivtc7f6rdp0hiygvj16z7bpeus6ln65r6jxml5fz1c8b2mh0opr77vy8lg69b2hdy3c28f97tykdoeliz7ec7hx9a90388gjy7abde2ffxyohg5cvc12rhv998gdwx7thudbj5ilfjlujtpqquyrezdhs0wdk2dsw2e39a19etv04skgkkw7yqfbifitch2e53gnjq56llxptia113c8jfvfvzz24dmqmsehxwq0ezb3fejosr4aqf11q5kx3kpf7l3ni1m2h7vzstho56xugcqa4yxqh95f0ny8pve7ajudfl1syfms7z428c9e53d11w28p5c23vroaxgkhfu9td1hofohy1hkodffxeay82x4fd00ymoc56suv2qiujykk51wngmqtw9fq8vq9kqtqnz3h0x80el0zv02go4e5003zy45eo3qm7q5fvwee == 
\z\m\4\a\l\1\x\9\m\m\f\z\d\x\8\g\g\o\p\a\r\7\q\l\e\i\x\8\j\t\k\t\l\g\u\w\3\0\y\5\j\1\4\8\r\j\q\v\m\d\d\0\8\0\m\2\8\c\8\j\k\e\a\f\r\n\a\i\v\t\c\7\f\6\r\d\p\0\h\i\y\g\v\j\1\6\z\7\b\p\e\u\s\6\l\n\6\5\r\6\j\x\m\l\5\f\z\1\c\8\b\2\m\h\0\o\p\r\7\7\v\y\8\l\g\6\9\b\2\h\d\y\3\c\2\8\f\9\7\t\y\k\d\o\e\l\i\z\7\e\c\7\h\x\9\a\9\0\3\8\8\g\j\y\7\a\b\d\e\2\f\f\x\y\o\h\g\5\c\v\c\1\2\r\h\v\9\9\8\g\d\w\x\7\t\h\u\d\b\j\5\i\l\f\j\l\u\j\t\p\q\q\u\y\r\e\z\d\h\s\0\w\d\k\2\d\s\w\2\e\3\9\a\1\9\e\t\v\0\4\s\k\g\k\k\w\7\y\q\f\b\i\f\i\t\c\h\2\e\5\3\g\n\j\q\5\6\l\l\x\p\t\i\a\1\1\3\c\8\j\f\v\f\v\z\z\2\4\d\m\q\m\s\e\h\x\w\q\0\e\z\b\3\f\e\j\o\s\r\4\a\q\f\1\1\q\5\k\x\3\k\p\f\7\l\3\n\i\1\m\2\h\7\v\z\s\t\h\o\5\6\x\u\g\c\q\a\4\y\x\q\h\9\5\f\0\n\y\8\p\v\e\7\a\j\u\d\f\l\1\s\y\f\m\s\7\z\4\2\8\c\9\e\5\3\d\1\1\w\2\8\p\5\c\2\3\v\r\o\a\x\g\k\h\f\u\9\t\d\1\h\o\f\o\h\y\1\h\k\o\d\f\f\x\e\a\y\8\2\x\4\f\d\0\0\y\m\o\c\5\6\s\u\v\2\q\i\u\j\y\k\k\5\1\w\n\g\m\q\t\w\9\f\q\8\v\q\9\k\q\t\q\n\z\3\h\0\x\8\0\e\l\0\z\v\0\2\g\o\4\e\5\0\0\3\z\y\4\5\e\o\3\q\m\7\q\5\f\v\w\e\e ]] 00:09:52.631 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:52.631 23:10:14 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:52.631 [2024-07-24 23:10:14.953194] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:52.631 [2024-07-24 23:10:14.953297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:09:52.631 [2024-07-24 23:10:15.091174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.888 [2024-07-24 23:10:15.225413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.888 [2024-07-24 23:10:15.284269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.146  Copying: 512/512 [B] (average 166 kBps) 00:09:53.146 00:09:53.147 23:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zm4al1x9mmfzdx8ggopar7qleix8jtktlguw30y5j148rjqvmdd080m28c8jkeafrnaivtc7f6rdp0hiygvj16z7bpeus6ln65r6jxml5fz1c8b2mh0opr77vy8lg69b2hdy3c28f97tykdoeliz7ec7hx9a90388gjy7abde2ffxyohg5cvc12rhv998gdwx7thudbj5ilfjlujtpqquyrezdhs0wdk2dsw2e39a19etv04skgkkw7yqfbifitch2e53gnjq56llxptia113c8jfvfvzz24dmqmsehxwq0ezb3fejosr4aqf11q5kx3kpf7l3ni1m2h7vzstho56xugcqa4yxqh95f0ny8pve7ajudfl1syfms7z428c9e53d11w28p5c23vroaxgkhfu9td1hofohy1hkodffxeay82x4fd00ymoc56suv2qiujykk51wngmqtw9fq8vq9kqtqnz3h0x80el0zv02go4e5003zy45eo3qm7q5fvwee == 
\z\m\4\a\l\1\x\9\m\m\f\z\d\x\8\g\g\o\p\a\r\7\q\l\e\i\x\8\j\t\k\t\l\g\u\w\3\0\y\5\j\1\4\8\r\j\q\v\m\d\d\0\8\0\m\2\8\c\8\j\k\e\a\f\r\n\a\i\v\t\c\7\f\6\r\d\p\0\h\i\y\g\v\j\1\6\z\7\b\p\e\u\s\6\l\n\6\5\r\6\j\x\m\l\5\f\z\1\c\8\b\2\m\h\0\o\p\r\7\7\v\y\8\l\g\6\9\b\2\h\d\y\3\c\2\8\f\9\7\t\y\k\d\o\e\l\i\z\7\e\c\7\h\x\9\a\9\0\3\8\8\g\j\y\7\a\b\d\e\2\f\f\x\y\o\h\g\5\c\v\c\1\2\r\h\v\9\9\8\g\d\w\x\7\t\h\u\d\b\j\5\i\l\f\j\l\u\j\t\p\q\q\u\y\r\e\z\d\h\s\0\w\d\k\2\d\s\w\2\e\3\9\a\1\9\e\t\v\0\4\s\k\g\k\k\w\7\y\q\f\b\i\f\i\t\c\h\2\e\5\3\g\n\j\q\5\6\l\l\x\p\t\i\a\1\1\3\c\8\j\f\v\f\v\z\z\2\4\d\m\q\m\s\e\h\x\w\q\0\e\z\b\3\f\e\j\o\s\r\4\a\q\f\1\1\q\5\k\x\3\k\p\f\7\l\3\n\i\1\m\2\h\7\v\z\s\t\h\o\5\6\x\u\g\c\q\a\4\y\x\q\h\9\5\f\0\n\y\8\p\v\e\7\a\j\u\d\f\l\1\s\y\f\m\s\7\z\4\2\8\c\9\e\5\3\d\1\1\w\2\8\p\5\c\2\3\v\r\o\a\x\g\k\h\f\u\9\t\d\1\h\o\f\o\h\y\1\h\k\o\d\f\f\x\e\a\y\8\2\x\4\f\d\0\0\y\m\o\c\5\6\s\u\v\2\q\i\u\j\y\k\k\5\1\w\n\g\m\q\t\w\9\f\q\8\v\q\9\k\q\t\q\n\z\3\h\0\x\8\0\e\l\0\z\v\0\2\g\o\4\e\5\0\0\3\z\y\4\5\e\o\3\q\m\7\q\5\f\v\w\e\e ]] 00:09:53.147 23:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:53.147 23:10:15 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:53.404 [2024-07-24 23:10:15.667614] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:09:53.404 [2024-07-24 23:10:15.667770] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63723 ] 00:09:53.404 [2024-07-24 23:10:15.808173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.662 [2024-07-24 23:10:15.928190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.662 [2024-07-24 23:10:15.981378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:53.921  Copying: 512/512 [B] (average 500 kBps) 00:09:53.921 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ zm4al1x9mmfzdx8ggopar7qleix8jtktlguw30y5j148rjqvmdd080m28c8jkeafrnaivtc7f6rdp0hiygvj16z7bpeus6ln65r6jxml5fz1c8b2mh0opr77vy8lg69b2hdy3c28f97tykdoeliz7ec7hx9a90388gjy7abde2ffxyohg5cvc12rhv998gdwx7thudbj5ilfjlujtpqquyrezdhs0wdk2dsw2e39a19etv04skgkkw7yqfbifitch2e53gnjq56llxptia113c8jfvfvzz24dmqmsehxwq0ezb3fejosr4aqf11q5kx3kpf7l3ni1m2h7vzstho56xugcqa4yxqh95f0ny8pve7ajudfl1syfms7z428c9e53d11w28p5c23vroaxgkhfu9td1hofohy1hkodffxeay82x4fd00ymoc56suv2qiujykk51wngmqtw9fq8vq9kqtqnz3h0x80el0zv02go4e5003zy45eo3qm7q5fvwee == 
\z\m\4\a\l\1\x\9\m\m\f\z\d\x\8\g\g\o\p\a\r\7\q\l\e\i\x\8\j\t\k\t\l\g\u\w\3\0\y\5\j\1\4\8\r\j\q\v\m\d\d\0\8\0\m\2\8\c\8\j\k\e\a\f\r\n\a\i\v\t\c\7\f\6\r\d\p\0\h\i\y\g\v\j\1\6\z\7\b\p\e\u\s\6\l\n\6\5\r\6\j\x\m\l\5\f\z\1\c\8\b\2\m\h\0\o\p\r\7\7\v\y\8\l\g\6\9\b\2\h\d\y\3\c\2\8\f\9\7\t\y\k\d\o\e\l\i\z\7\e\c\7\h\x\9\a\9\0\3\8\8\g\j\y\7\a\b\d\e\2\f\f\x\y\o\h\g\5\c\v\c\1\2\r\h\v\9\9\8\g\d\w\x\7\t\h\u\d\b\j\5\i\l\f\j\l\u\j\t\p\q\q\u\y\r\e\z\d\h\s\0\w\d\k\2\d\s\w\2\e\3\9\a\1\9\e\t\v\0\4\s\k\g\k\k\w\7\y\q\f\b\i\f\i\t\c\h\2\e\5\3\g\n\j\q\5\6\l\l\x\p\t\i\a\1\1\3\c\8\j\f\v\f\v\z\z\2\4\d\m\q\m\s\e\h\x\w\q\0\e\z\b\3\f\e\j\o\s\r\4\a\q\f\1\1\q\5\k\x\3\k\p\f\7\l\3\n\i\1\m\2\h\7\v\z\s\t\h\o\5\6\x\u\g\c\q\a\4\y\x\q\h\9\5\f\0\n\y\8\p\v\e\7\a\j\u\d\f\l\1\s\y\f\m\s\7\z\4\2\8\c\9\e\5\3\d\1\1\w\2\8\p\5\c\2\3\v\r\o\a\x\g\k\h\f\u\9\t\d\1\h\o\f\o\h\y\1\h\k\o\d\f\f\x\e\a\y\8\2\x\4\f\d\0\0\y\m\o\c\5\6\s\u\v\2\q\i\u\j\y\k\k\5\1\w\n\g\m\q\t\w\9\f\q\8\v\q\9\k\q\t\q\n\z\3\h\0\x\8\0\e\l\0\z\v\0\2\g\o\4\e\5\0\0\3\z\y\4\5\e\o\3\q\m\7\q\5\f\v\w\e\e ]] 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:53.922 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:53.922 [2024-07-24 23:10:16.319434] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
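The dd_flags_misc loop above drives spdk_dd with --iflag/--oflag combinations of direct, nonblock, sync and dsync, and the earlier posix tests used directory, nofollow and noatime the same way. Those names follow the usual open(2) flag meanings; the parse_flag() helper below is a hypothetical illustration of that mapping (spdk_dd's real parser is not shown in this log), written as a self-contained C sketch:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical name-to-flag mapping, spelling out the usual
     * POSIX/Linux meaning of each --iflag/--oflag value. */
    static int parse_flag(const char *name)
    {
        if (strcmp(name, "direct") == 0)
            return O_DIRECT;    /* bypass the page cache */
        if (strcmp(name, "nonblock") == 0)
            return O_NONBLOCK;  /* do not block in open()/IO */
        if (strcmp(name, "sync") == 0)
            return O_SYNC;      /* writes flush data and metadata */
        if (strcmp(name, "dsync") == 0)
            return O_DSYNC;     /* writes flush data only */
        if (strcmp(name, "noatime") == 0)
            return O_NOATIME;   /* leave st_atime untouched on reads */
        if (strcmp(name, "nofollow") == 0)
            return O_NOFOLLOW;  /* ELOOP if the path is a symlink */
        if (strcmp(name, "directory") == 0)
            return O_DIRECTORY; /* ENOTDIR if the path is a regular file */
        return 0;
    }

    int main(void)
    {
        /* e.g. the --iflag=direct --oflag=dsync combination from the loop */
        int iflags = O_RDONLY | parse_flag("direct");
        int oflags = O_WRONLY | O_CREAT | parse_flag("dsync");

        printf("iflags=%#x oflags=%#x\n", iflags, oflags);
        return 0;
    }

The nofollow and directory entries also account for the "Too many levels of symbolic links" and "Not a directory" errors that dd_open_file logged in the earlier posix tests of this run.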
00:09:53.922 [2024-07-24 23:10:16.319516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63725 ] 00:09:54.183 [2024-07-24 23:10:16.455685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.183 [2024-07-24 23:10:16.584367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.183 [2024-07-24 23:10:16.641189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:54.441  Copying: 512/512 [B] (average 500 kBps) 00:09:54.441 00:09:54.699 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n98i8g61zczg10siccgryrh22e1113b7apljoi3z3piurcptbrgmc0ehfad7vsa6hb3nm66yoqnkwotwnxui6a3n9zj4jkir1bkpmh0zfr7flfqnr4pfh6i1wybikkrmqtwfs7z1lzwz6dgq3yslrd6gx3kifwrftw8qfnulwmnbywn9amr4qqlwbeehg2sugg7jsjc287zbqqr4ixmn6i3rsak16s9ngt1v3dtc40zrjqimizqwiu2hk5f34zo1ghytyhnr9gu51kektjamekc6ghszijyiq11pxvo7luawxvmjw1mv3jyv6z1x0gw6mhdft7k3yg4gqo780anlec94qqpiina08aznsrhjlkyi4yr0uh3gunvi2w7j5ieq2jfio8palp3bzip40cxbljrz0xe12c75q1zvvs5gyocpbnnko2b1lot5w3xnao35wflr4be11h481ip624mpj1u716gx0ykxiws1h8mjbukpl00nn4reosx8ttkqntf1 == \n\9\8\i\8\g\6\1\z\c\z\g\1\0\s\i\c\c\g\r\y\r\h\2\2\e\1\1\1\3\b\7\a\p\l\j\o\i\3\z\3\p\i\u\r\c\p\t\b\r\g\m\c\0\e\h\f\a\d\7\v\s\a\6\h\b\3\n\m\6\6\y\o\q\n\k\w\o\t\w\n\x\u\i\6\a\3\n\9\z\j\4\j\k\i\r\1\b\k\p\m\h\0\z\f\r\7\f\l\f\q\n\r\4\p\f\h\6\i\1\w\y\b\i\k\k\r\m\q\t\w\f\s\7\z\1\l\z\w\z\6\d\g\q\3\y\s\l\r\d\6\g\x\3\k\i\f\w\r\f\t\w\8\q\f\n\u\l\w\m\n\b\y\w\n\9\a\m\r\4\q\q\l\w\b\e\e\h\g\2\s\u\g\g\7\j\s\j\c\2\8\7\z\b\q\q\r\4\i\x\m\n\6\i\3\r\s\a\k\1\6\s\9\n\g\t\1\v\3\d\t\c\4\0\z\r\j\q\i\m\i\z\q\w\i\u\2\h\k\5\f\3\4\z\o\1\g\h\y\t\y\h\n\r\9\g\u\5\1\k\e\k\t\j\a\m\e\k\c\6\g\h\s\z\i\j\y\i\q\1\1\p\x\v\o\7\l\u\a\w\x\v\m\j\w\1\m\v\3\j\y\v\6\z\1\x\0\g\w\6\m\h\d\f\t\7\k\3\y\g\4\g\q\o\7\8\0\a\n\l\e\c\9\4\q\q\p\i\i\n\a\0\8\a\z\n\s\r\h\j\l\k\y\i\4\y\r\0\u\h\3\g\u\n\v\i\2\w\7\j\5\i\e\q\2\j\f\i\o\8\p\a\l\p\3\b\z\i\p\4\0\c\x\b\l\j\r\z\0\x\e\1\2\c\7\5\q\1\z\v\v\s\5\g\y\o\c\p\b\n\n\k\o\2\b\1\l\o\t\5\w\3\x\n\a\o\3\5\w\f\l\r\4\b\e\1\1\h\4\8\1\i\p\6\2\4\m\p\j\1\u\7\1\6\g\x\0\y\k\x\i\w\s\1\h\8\m\j\b\u\k\p\l\0\0\n\n\4\r\e\o\s\x\8\t\t\k\q\n\t\f\1 ]] 00:09:54.699 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:54.699 23:10:16 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:54.699 [2024-07-24 23:10:16.984115] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:54.699 [2024-07-24 23:10:16.984250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63738 ] 00:09:54.699 [2024-07-24 23:10:17.121650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.957 [2024-07-24 23:10:17.239814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.957 [2024-07-24 23:10:17.292716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.215  Copying: 512/512 [B] (average 500 kBps) 00:09:55.215 00:09:55.215 23:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n98i8g61zczg10siccgryrh22e1113b7apljoi3z3piurcptbrgmc0ehfad7vsa6hb3nm66yoqnkwotwnxui6a3n9zj4jkir1bkpmh0zfr7flfqnr4pfh6i1wybikkrmqtwfs7z1lzwz6dgq3yslrd6gx3kifwrftw8qfnulwmnbywn9amr4qqlwbeehg2sugg7jsjc287zbqqr4ixmn6i3rsak16s9ngt1v3dtc40zrjqimizqwiu2hk5f34zo1ghytyhnr9gu51kektjamekc6ghszijyiq11pxvo7luawxvmjw1mv3jyv6z1x0gw6mhdft7k3yg4gqo780anlec94qqpiina08aznsrhjlkyi4yr0uh3gunvi2w7j5ieq2jfio8palp3bzip40cxbljrz0xe12c75q1zvvs5gyocpbnnko2b1lot5w3xnao35wflr4be11h481ip624mpj1u716gx0ykxiws1h8mjbukpl00nn4reosx8ttkqntf1 == \n\9\8\i\8\g\6\1\z\c\z\g\1\0\s\i\c\c\g\r\y\r\h\2\2\e\1\1\1\3\b\7\a\p\l\j\o\i\3\z\3\p\i\u\r\c\p\t\b\r\g\m\c\0\e\h\f\a\d\7\v\s\a\6\h\b\3\n\m\6\6\y\o\q\n\k\w\o\t\w\n\x\u\i\6\a\3\n\9\z\j\4\j\k\i\r\1\b\k\p\m\h\0\z\f\r\7\f\l\f\q\n\r\4\p\f\h\6\i\1\w\y\b\i\k\k\r\m\q\t\w\f\s\7\z\1\l\z\w\z\6\d\g\q\3\y\s\l\r\d\6\g\x\3\k\i\f\w\r\f\t\w\8\q\f\n\u\l\w\m\n\b\y\w\n\9\a\m\r\4\q\q\l\w\b\e\e\h\g\2\s\u\g\g\7\j\s\j\c\2\8\7\z\b\q\q\r\4\i\x\m\n\6\i\3\r\s\a\k\1\6\s\9\n\g\t\1\v\3\d\t\c\4\0\z\r\j\q\i\m\i\z\q\w\i\u\2\h\k\5\f\3\4\z\o\1\g\h\y\t\y\h\n\r\9\g\u\5\1\k\e\k\t\j\a\m\e\k\c\6\g\h\s\z\i\j\y\i\q\1\1\p\x\v\o\7\l\u\a\w\x\v\m\j\w\1\m\v\3\j\y\v\6\z\1\x\0\g\w\6\m\h\d\f\t\7\k\3\y\g\4\g\q\o\7\8\0\a\n\l\e\c\9\4\q\q\p\i\i\n\a\0\8\a\z\n\s\r\h\j\l\k\y\i\4\y\r\0\u\h\3\g\u\n\v\i\2\w\7\j\5\i\e\q\2\j\f\i\o\8\p\a\l\p\3\b\z\i\p\4\0\c\x\b\l\j\r\z\0\x\e\1\2\c\7\5\q\1\z\v\v\s\5\g\y\o\c\p\b\n\n\k\o\2\b\1\l\o\t\5\w\3\x\n\a\o\3\5\w\f\l\r\4\b\e\1\1\h\4\8\1\i\p\6\2\4\m\p\j\1\u\7\1\6\g\x\0\y\k\x\i\w\s\1\h\8\m\j\b\u\k\p\l\0\0\n\n\4\r\e\o\s\x\8\t\t\k\q\n\t\f\1 ]] 00:09:55.216 23:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:55.216 23:10:17 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:55.216 [2024-07-24 23:10:17.635048] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:55.216 [2024-07-24 23:10:17.635156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63746 ] 00:09:55.474 [2024-07-24 23:10:17.774517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.474 [2024-07-24 23:10:17.886613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.474 [2024-07-24 23:10:17.939486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:55.732  Copying: 512/512 [B] (average 100 kBps) 00:09:55.732 00:09:55.732 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n98i8g61zczg10siccgryrh22e1113b7apljoi3z3piurcptbrgmc0ehfad7vsa6hb3nm66yoqnkwotwnxui6a3n9zj4jkir1bkpmh0zfr7flfqnr4pfh6i1wybikkrmqtwfs7z1lzwz6dgq3yslrd6gx3kifwrftw8qfnulwmnbywn9amr4qqlwbeehg2sugg7jsjc287zbqqr4ixmn6i3rsak16s9ngt1v3dtc40zrjqimizqwiu2hk5f34zo1ghytyhnr9gu51kektjamekc6ghszijyiq11pxvo7luawxvmjw1mv3jyv6z1x0gw6mhdft7k3yg4gqo780anlec94qqpiina08aznsrhjlkyi4yr0uh3gunvi2w7j5ieq2jfio8palp3bzip40cxbljrz0xe12c75q1zvvs5gyocpbnnko2b1lot5w3xnao35wflr4be11h481ip624mpj1u716gx0ykxiws1h8mjbukpl00nn4reosx8ttkqntf1 == \n\9\8\i\8\g\6\1\z\c\z\g\1\0\s\i\c\c\g\r\y\r\h\2\2\e\1\1\1\3\b\7\a\p\l\j\o\i\3\z\3\p\i\u\r\c\p\t\b\r\g\m\c\0\e\h\f\a\d\7\v\s\a\6\h\b\3\n\m\6\6\y\o\q\n\k\w\o\t\w\n\x\u\i\6\a\3\n\9\z\j\4\j\k\i\r\1\b\k\p\m\h\0\z\f\r\7\f\l\f\q\n\r\4\p\f\h\6\i\1\w\y\b\i\k\k\r\m\q\t\w\f\s\7\z\1\l\z\w\z\6\d\g\q\3\y\s\l\r\d\6\g\x\3\k\i\f\w\r\f\t\w\8\q\f\n\u\l\w\m\n\b\y\w\n\9\a\m\r\4\q\q\l\w\b\e\e\h\g\2\s\u\g\g\7\j\s\j\c\2\8\7\z\b\q\q\r\4\i\x\m\n\6\i\3\r\s\a\k\1\6\s\9\n\g\t\1\v\3\d\t\c\4\0\z\r\j\q\i\m\i\z\q\w\i\u\2\h\k\5\f\3\4\z\o\1\g\h\y\t\y\h\n\r\9\g\u\5\1\k\e\k\t\j\a\m\e\k\c\6\g\h\s\z\i\j\y\i\q\1\1\p\x\v\o\7\l\u\a\w\x\v\m\j\w\1\m\v\3\j\y\v\6\z\1\x\0\g\w\6\m\h\d\f\t\7\k\3\y\g\4\g\q\o\7\8\0\a\n\l\e\c\9\4\q\q\p\i\i\n\a\0\8\a\z\n\s\r\h\j\l\k\y\i\4\y\r\0\u\h\3\g\u\n\v\i\2\w\7\j\5\i\e\q\2\j\f\i\o\8\p\a\l\p\3\b\z\i\p\4\0\c\x\b\l\j\r\z\0\x\e\1\2\c\7\5\q\1\z\v\v\s\5\g\y\o\c\p\b\n\n\k\o\2\b\1\l\o\t\5\w\3\x\n\a\o\3\5\w\f\l\r\4\b\e\1\1\h\4\8\1\i\p\6\2\4\m\p\j\1\u\7\1\6\g\x\0\y\k\x\i\w\s\1\h\8\m\j\b\u\k\p\l\0\0\n\n\4\r\e\o\s\x\8\t\t\k\q\n\t\f\1 ]] 00:09:55.732 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:55.732 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:55.990 [2024-07-24 23:10:18.263775] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:09:55.990 [2024-07-24 23:10:18.263881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63753 ] 00:09:55.990 [2024-07-24 23:10:18.401225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.248 [2024-07-24 23:10:18.517906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.248 [2024-07-24 23:10:18.573271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:56.506  Copying: 512/512 [B] (average 71 kBps) 00:09:56.506 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ n98i8g61zczg10siccgryrh22e1113b7apljoi3z3piurcptbrgmc0ehfad7vsa6hb3nm66yoqnkwotwnxui6a3n9zj4jkir1bkpmh0zfr7flfqnr4pfh6i1wybikkrmqtwfs7z1lzwz6dgq3yslrd6gx3kifwrftw8qfnulwmnbywn9amr4qqlwbeehg2sugg7jsjc287zbqqr4ixmn6i3rsak16s9ngt1v3dtc40zrjqimizqwiu2hk5f34zo1ghytyhnr9gu51kektjamekc6ghszijyiq11pxvo7luawxvmjw1mv3jyv6z1x0gw6mhdft7k3yg4gqo780anlec94qqpiina08aznsrhjlkyi4yr0uh3gunvi2w7j5ieq2jfio8palp3bzip40cxbljrz0xe12c75q1zvvs5gyocpbnnko2b1lot5w3xnao35wflr4be11h481ip624mpj1u716gx0ykxiws1h8mjbukpl00nn4reosx8ttkqntf1 == \n\9\8\i\8\g\6\1\z\c\z\g\1\0\s\i\c\c\g\r\y\r\h\2\2\e\1\1\1\3\b\7\a\p\l\j\o\i\3\z\3\p\i\u\r\c\p\t\b\r\g\m\c\0\e\h\f\a\d\7\v\s\a\6\h\b\3\n\m\6\6\y\o\q\n\k\w\o\t\w\n\x\u\i\6\a\3\n\9\z\j\4\j\k\i\r\1\b\k\p\m\h\0\z\f\r\7\f\l\f\q\n\r\4\p\f\h\6\i\1\w\y\b\i\k\k\r\m\q\t\w\f\s\7\z\1\l\z\w\z\6\d\g\q\3\y\s\l\r\d\6\g\x\3\k\i\f\w\r\f\t\w\8\q\f\n\u\l\w\m\n\b\y\w\n\9\a\m\r\4\q\q\l\w\b\e\e\h\g\2\s\u\g\g\7\j\s\j\c\2\8\7\z\b\q\q\r\4\i\x\m\n\6\i\3\r\s\a\k\1\6\s\9\n\g\t\1\v\3\d\t\c\4\0\z\r\j\q\i\m\i\z\q\w\i\u\2\h\k\5\f\3\4\z\o\1\g\h\y\t\y\h\n\r\9\g\u\5\1\k\e\k\t\j\a\m\e\k\c\6\g\h\s\z\i\j\y\i\q\1\1\p\x\v\o\7\l\u\a\w\x\v\m\j\w\1\m\v\3\j\y\v\6\z\1\x\0\g\w\6\m\h\d\f\t\7\k\3\y\g\4\g\q\o\7\8\0\a\n\l\e\c\9\4\q\q\p\i\i\n\a\0\8\a\z\n\s\r\h\j\l\k\y\i\4\y\r\0\u\h\3\g\u\n\v\i\2\w\7\j\5\i\e\q\2\j\f\i\o\8\p\a\l\p\3\b\z\i\p\4\0\c\x\b\l\j\r\z\0\x\e\1\2\c\7\5\q\1\z\v\v\s\5\g\y\o\c\p\b\n\n\k\o\2\b\1\l\o\t\5\w\3\x\n\a\o\3\5\w\f\l\r\4\b\e\1\1\h\4\8\1\i\p\6\2\4\m\p\j\1\u\7\1\6\g\x\0\y\k\x\i\w\s\1\h\8\m\j\b\u\k\p\l\0\0\n\n\4\r\e\o\s\x\8\t\t\k\q\n\t\f\1 ]] 00:09:56.506 00:09:56.506 real 0m5.253s 00:09:56.506 user 0m3.057s 00:09:56.506 sys 0m1.199s 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:56.506 ************************************ 00:09:56.506 END TEST dd_flags_misc_forced_aio 00:09:56.506 ************************************ 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:56.506 00:09:56.506 real 0m23.166s 00:09:56.506 user 0m12.271s 00:09:56.506 sys 0m6.761s 00:09:56.506 23:10:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.506 23:10:18 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:56.506 ************************************ 00:09:56.506 END TEST spdk_dd_posix 00:09:56.506 ************************************ 00:09:56.506 23:10:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:09:56.507 23:10:18 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:56.507 23:10:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.507 23:10:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.507 23:10:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:56.507 ************************************ 00:09:56.507 START TEST spdk_dd_malloc 00:09:56.507 ************************************ 00:09:56.507 23:10:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:56.765 * Looking for test storage... 00:09:56.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:56.765 ************************************ 00:09:56.765 START TEST dd_malloc_copy 00:09:56.765 ************************************ 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:56.765 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:56.766 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:56.766 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:56.766 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:56.766 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:56.766 23:10:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:56.766 [2024-07-24 23:10:19.107246] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
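The malloc_copy test starting here hands spdk_dd a JSON config (printed just below) that creates two RAM-backed bdevs, malloc0 and malloc1, and copies one into the other with --ib/--ob. The sketch below only works through the arithmetic behind the bdev size and the "average ... MBps" progress lines; the elapsed time is a stand-in value, not a measurement from this run:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Values from the bdev_malloc_create params printed just below. */
        uint64_t num_blocks = 1048576;
        uint64_t block_size = 512;                              /* bytes */
        uint64_t total_mib  = (num_blocks * block_size) >> 20;  /* 512 MiB */

        /* Placeholder elapsed time for one full copy; the real figure is
         * whatever produced the "Copying: ... (average N MBps)" lines. */
        double elapsed_s = 2.7;

        printf("bdev size : %llu MiB\n", (unsigned long long)total_mib);
        printf("avg rate  : %.0f MBps\n", (double)total_mib / elapsed_s);
        return 0;
    }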
00:09:56.766 [2024-07-24 23:10:19.107350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63827 ] 00:09:56.766 { 00:09:56.766 "subsystems": [ 00:09:56.766 { 00:09:56.766 "subsystem": "bdev", 00:09:56.766 "config": [ 00:09:56.766 { 00:09:56.766 "params": { 00:09:56.766 "block_size": 512, 00:09:56.766 "num_blocks": 1048576, 00:09:56.766 "name": "malloc0" 00:09:56.766 }, 00:09:56.766 "method": "bdev_malloc_create" 00:09:56.766 }, 00:09:56.766 { 00:09:56.766 "params": { 00:09:56.766 "block_size": 512, 00:09:56.766 "num_blocks": 1048576, 00:09:56.766 "name": "malloc1" 00:09:56.766 }, 00:09:56.766 "method": "bdev_malloc_create" 00:09:56.766 }, 00:09:56.766 { 00:09:56.766 "method": "bdev_wait_for_examine" 00:09:56.766 } 00:09:56.766 ] 00:09:56.766 } 00:09:56.766 ] 00:09:56.766 } 00:09:56.766 [2024-07-24 23:10:19.239010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.023 [2024-07-24 23:10:19.360269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.023 [2024-07-24 23:10:19.415239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.833  Copying: 188/512 [MB] (188 MBps) Copying: 367/512 [MB] (179 MBps) Copying: 512/512 [MB] (average 188 MBps) 00:10:00.833 00:10:00.833 23:10:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:10:00.833 23:10:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:10:00.833 23:10:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:00.833 23:10:23 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 { 00:10:00.833 "subsystems": [ 00:10:00.833 { 00:10:00.833 "subsystem": "bdev", 00:10:00.833 "config": [ 00:10:00.833 { 00:10:00.833 "params": { 00:10:00.833 "block_size": 512, 00:10:00.833 "num_blocks": 1048576, 00:10:00.833 "name": "malloc0" 00:10:00.833 }, 00:10:00.833 "method": "bdev_malloc_create" 00:10:00.833 }, 00:10:00.833 { 00:10:00.833 "params": { 00:10:00.833 "block_size": 512, 00:10:00.833 "num_blocks": 1048576, 00:10:00.833 "name": "malloc1" 00:10:00.833 }, 00:10:00.833 "method": "bdev_malloc_create" 00:10:00.833 }, 00:10:00.833 { 00:10:00.833 "method": "bdev_wait_for_examine" 00:10:00.833 } 00:10:00.833 ] 00:10:00.833 } 00:10:00.833 ] 00:10:00.833 } 00:10:00.833 [2024-07-24 23:10:23.175609] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:00.833 [2024-07-24 23:10:23.175706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63880 ] 00:10:00.833 [2024-07-24 23:10:23.315828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.090 [2024-07-24 23:10:23.444181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.090 [2024-07-24 23:10:23.502521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:04.955  Copying: 197/512 [MB] (197 MBps) Copying: 389/512 [MB] (192 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:10:04.955 00:10:04.955 ************************************ 00:10:04.955 END TEST dd_malloc_copy 00:10:04.955 ************************************ 00:10:04.955 00:10:04.955 real 0m8.037s 00:10:04.955 user 0m6.994s 00:10:04.955 sys 0m0.864s 00:10:04.955 23:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.955 23:10:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:10:04.955 23:10:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:10:04.955 00:10:04.955 real 0m8.171s 00:10:04.955 user 0m7.055s 00:10:04.955 sys 0m0.940s 00:10:04.955 23:10:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.955 23:10:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:10:04.955 ************************************ 00:10:04.955 END TEST spdk_dd_malloc 00:10:04.955 ************************************ 00:10:04.955 23:10:27 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:04.955 23:10:27 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:04.955 23:10:27 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:04.955 23:10:27 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.955 23:10:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:04.955 ************************************ 00:10:04.955 START TEST spdk_dd_bdev_to_bdev 00:10:04.955 ************************************ 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:10:04.955 * Looking for test storage... 
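The dd_malloc_copy pass above is driven entirely by the JSON printed in the trace: two malloc bdevs of 1048576 blocks at 512 bytes each, copied in both directions. Reduced to a standalone command it looks roughly like the sketch below; the here-doc stands in for the /dev/fd/62 descriptor that gen_conf feeds to spdk_dd in the test, and the binary path is the one from this build tree.

# Minimal sketch of the malloc0 -> malloc1 copy exercised by test/dd/malloc.sh above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

At 1048576 blocks of 512 bytes each, every malloc bdev is 512 MiB, which matches the 512/512 [MB] totals reported for both directions of the copy.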
00:10:04.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:10:04.955 
23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:04.955 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:04.956 ************************************ 00:10:04.956 START TEST dd_inflate_file 00:10:04.956 ************************************ 00:10:04.956 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:10:04.956 [2024-07-24 23:10:27.317850] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:04.956 [2024-07-24 23:10:27.317962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63990 ] 00:10:05.242 [2024-07-24 23:10:27.457013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.242 [2024-07-24 23:10:27.587018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.242 [2024-07-24 23:10:27.643292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:05.500  Copying: 64/64 [MB] (average 1422 MBps) 00:10:05.500 00:10:05.500 00:10:05.500 real 0m0.679s 00:10:05.500 user 0m0.416s 00:10:05.500 sys 0m0.319s 00:10:05.500 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:05.500 ************************************ 00:10:05.500 END TEST dd_inflate_file 00:10:05.500 ************************************ 00:10:05.500 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:05.758 23:10:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:05.758 ************************************ 00:10:05.758 START TEST dd_copy_to_out_bdev 00:10:05.758 ************************************ 00:10:05.759 23:10:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:10:05.759 { 00:10:05.759 "subsystems": [ 00:10:05.759 { 00:10:05.759 "subsystem": "bdev", 00:10:05.759 "config": [ 00:10:05.759 { 00:10:05.759 "params": { 00:10:05.759 "trtype": "pcie", 00:10:05.759 "traddr": "0000:00:10.0", 00:10:05.759 "name": "Nvme0" 00:10:05.759 }, 00:10:05.759 "method": "bdev_nvme_attach_controller" 00:10:05.759 }, 00:10:05.759 { 00:10:05.759 "params": { 00:10:05.759 "trtype": "pcie", 00:10:05.759 "traddr": "0000:00:11.0", 00:10:05.759 "name": "Nvme1" 00:10:05.759 }, 00:10:05.759 "method": "bdev_nvme_attach_controller" 00:10:05.759 }, 00:10:05.759 { 00:10:05.759 "method": "bdev_wait_for_examine" 00:10:05.759 } 00:10:05.759 ] 00:10:05.759 } 00:10:05.759 ] 00:10:05.759 } 00:10:05.759 [2024-07-24 23:10:28.056793] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:05.759 [2024-07-24 23:10:28.056881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64024 ] 00:10:05.759 [2024-07-24 23:10:28.199510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.016 [2024-07-24 23:10:28.314872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.016 [2024-07-24 23:10:28.369057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:07.648  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 57 MBps) 00:10:07.648 00:10:07.648 00:10:07.648 real 0m1.954s 00:10:07.648 user 0m1.714s 00:10:07.648 sys 0m1.507s 00:10:07.648 23:10:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.648 23:10:29 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 ************************************ 00:10:07.648 END TEST dd_copy_to_out_bdev 00:10:07.648 ************************************ 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 ************************************ 00:10:07.648 START TEST dd_offset_magic 00:10:07.648 ************************************ 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:07.648 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:07.648 { 00:10:07.648 "subsystems": [ 00:10:07.648 { 00:10:07.648 "subsystem": "bdev", 00:10:07.648 "config": [ 00:10:07.648 { 00:10:07.648 "params": { 00:10:07.648 "trtype": "pcie", 00:10:07.648 "traddr": "0000:00:10.0", 00:10:07.648 "name": "Nvme0" 00:10:07.648 }, 00:10:07.648 "method": "bdev_nvme_attach_controller" 00:10:07.648 }, 00:10:07.648 { 00:10:07.648 "params": { 00:10:07.648 "trtype": "pcie", 00:10:07.648 "traddr": 
"0000:00:11.0", 00:10:07.648 "name": "Nvme1" 00:10:07.648 }, 00:10:07.648 "method": "bdev_nvme_attach_controller" 00:10:07.648 }, 00:10:07.648 { 00:10:07.648 "method": "bdev_wait_for_examine" 00:10:07.648 } 00:10:07.648 ] 00:10:07.648 } 00:10:07.648 ] 00:10:07.648 } 00:10:07.648 [2024-07-24 23:10:30.072965] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:07.648 [2024-07-24 23:10:30.073060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64069 ] 00:10:07.907 [2024-07-24 23:10:30.211907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.907 [2024-07-24 23:10:30.331174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.907 [2024-07-24 23:10:30.385602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:08.423  Copying: 65/65 [MB] (average 942 MBps) 00:10:08.423 00:10:08.423 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:08.423 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:10:08.423 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:08.423 23:10:30 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:08.681 [2024-07-24 23:10:30.934288] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:08.681 [2024-07-24 23:10:30.934387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64089 ] 00:10:08.681 { 00:10:08.681 "subsystems": [ 00:10:08.681 { 00:10:08.681 "subsystem": "bdev", 00:10:08.681 "config": [ 00:10:08.681 { 00:10:08.681 "params": { 00:10:08.681 "trtype": "pcie", 00:10:08.681 "traddr": "0000:00:10.0", 00:10:08.681 "name": "Nvme0" 00:10:08.681 }, 00:10:08.681 "method": "bdev_nvme_attach_controller" 00:10:08.681 }, 00:10:08.681 { 00:10:08.681 "params": { 00:10:08.681 "trtype": "pcie", 00:10:08.681 "traddr": "0000:00:11.0", 00:10:08.681 "name": "Nvme1" 00:10:08.681 }, 00:10:08.681 "method": "bdev_nvme_attach_controller" 00:10:08.681 }, 00:10:08.681 { 00:10:08.681 "method": "bdev_wait_for_examine" 00:10:08.681 } 00:10:08.681 ] 00:10:08.681 } 00:10:08.681 ] 00:10:08.681 } 00:10:08.681 [2024-07-24 23:10:31.072935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.939 [2024-07-24 23:10:31.191892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.939 [2024-07-24 23:10:31.245365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:09.197  Copying: 1024/1024 [kB] (average 1000 MBps) 00:10:09.197 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:09.197 23:10:31 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:09.455 [2024-07-24 23:10:31.692751] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:09.455 [2024-07-24 23:10:31.692850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:10:09.455 { 00:10:09.455 "subsystems": [ 00:10:09.455 { 00:10:09.455 "subsystem": "bdev", 00:10:09.455 "config": [ 00:10:09.455 { 00:10:09.455 "params": { 00:10:09.455 "trtype": "pcie", 00:10:09.455 "traddr": "0000:00:10.0", 00:10:09.455 "name": "Nvme0" 00:10:09.455 }, 00:10:09.455 "method": "bdev_nvme_attach_controller" 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "params": { 00:10:09.455 "trtype": "pcie", 00:10:09.455 "traddr": "0000:00:11.0", 00:10:09.455 "name": "Nvme1" 00:10:09.455 }, 00:10:09.455 "method": "bdev_nvme_attach_controller" 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "method": "bdev_wait_for_examine" 00:10:09.455 } 00:10:09.455 ] 00:10:09.455 } 00:10:09.455 ] 00:10:09.455 } 00:10:09.455 [2024-07-24 23:10:31.827411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.714 [2024-07-24 23:10:31.957168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.714 [2024-07-24 23:10:32.019533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.229  Copying: 65/65 [MB] (average 1065 MBps) 00:10:10.229 00:10:10.229 23:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:10:10.229 23:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:10:10.229 23:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:10:10.229 23:10:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:10.229 [2024-07-24 23:10:32.556761] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:10.229 [2024-07-24 23:10:32.556847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:10:10.229 { 00:10:10.229 "subsystems": [ 00:10:10.229 { 00:10:10.229 "subsystem": "bdev", 00:10:10.229 "config": [ 00:10:10.229 { 00:10:10.229 "params": { 00:10:10.229 "trtype": "pcie", 00:10:10.229 "traddr": "0000:00:10.0", 00:10:10.229 "name": "Nvme0" 00:10:10.229 }, 00:10:10.229 "method": "bdev_nvme_attach_controller" 00:10:10.229 }, 00:10:10.229 { 00:10:10.229 "params": { 00:10:10.229 "trtype": "pcie", 00:10:10.229 "traddr": "0000:00:11.0", 00:10:10.229 "name": "Nvme1" 00:10:10.229 }, 00:10:10.229 "method": "bdev_nvme_attach_controller" 00:10:10.229 }, 00:10:10.229 { 00:10:10.229 "method": "bdev_wait_for_examine" 00:10:10.229 } 00:10:10.229 ] 00:10:10.229 } 00:10:10.229 ] 00:10:10.229 } 00:10:10.229 [2024-07-24 23:10:32.691415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.486 [2024-07-24 23:10:32.804708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.487 [2024-07-24 23:10:32.857684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.002  Copying: 1024/1024 [kB] (average 500 MBps) 00:10:11.002 00:10:11.002 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:10:11.003 00:10:11.003 real 0m3.248s 00:10:11.003 user 0m2.421s 00:10:11.003 sys 0m0.899s 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:10:11.003 ************************************ 00:10:11.003 END TEST dd_offset_magic 00:10:11.003 ************************************ 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:11.003 23:10:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:11.003 [2024-07-24 23:10:33.358514] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:11.003 [2024-07-24 23:10:33.358624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64159 ] 00:10:11.003 { 00:10:11.003 "subsystems": [ 00:10:11.003 { 00:10:11.003 "subsystem": "bdev", 00:10:11.003 "config": [ 00:10:11.003 { 00:10:11.003 "params": { 00:10:11.003 "trtype": "pcie", 00:10:11.003 "traddr": "0000:00:10.0", 00:10:11.003 "name": "Nvme0" 00:10:11.003 }, 00:10:11.003 "method": "bdev_nvme_attach_controller" 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "params": { 00:10:11.003 "trtype": "pcie", 00:10:11.003 "traddr": "0000:00:11.0", 00:10:11.003 "name": "Nvme1" 00:10:11.003 }, 00:10:11.003 "method": "bdev_nvme_attach_controller" 00:10:11.003 }, 00:10:11.003 { 00:10:11.003 "method": "bdev_wait_for_examine" 00:10:11.003 } 00:10:11.003 ] 00:10:11.003 } 00:10:11.003 ] 00:10:11.003 } 00:10:11.261 [2024-07-24 23:10:33.498045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.261 [2024-07-24 23:10:33.621172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.261 [2024-07-24 23:10:33.674789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.776  Copying: 5120/5120 [kB] (average 1000 MBps) 00:10:11.776 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:11.776 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:11.776 [2024-07-24 23:10:34.134840] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:11.776 [2024-07-24 23:10:34.135369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64180 ] 00:10:11.776 { 00:10:11.776 "subsystems": [ 00:10:11.776 { 00:10:11.776 "subsystem": "bdev", 00:10:11.776 "config": [ 00:10:11.776 { 00:10:11.776 "params": { 00:10:11.776 "trtype": "pcie", 00:10:11.776 "traddr": "0000:00:10.0", 00:10:11.776 "name": "Nvme0" 00:10:11.776 }, 00:10:11.776 "method": "bdev_nvme_attach_controller" 00:10:11.776 }, 00:10:11.776 { 00:10:11.776 "params": { 00:10:11.776 "trtype": "pcie", 00:10:11.776 "traddr": "0000:00:11.0", 00:10:11.776 "name": "Nvme1" 00:10:11.776 }, 00:10:11.776 "method": "bdev_nvme_attach_controller" 00:10:11.776 }, 00:10:11.776 { 00:10:11.776 "method": "bdev_wait_for_examine" 00:10:11.776 } 00:10:11.776 ] 00:10:11.776 } 00:10:11.776 ] 00:10:11.776 } 00:10:12.034 [2024-07-24 23:10:34.271402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.034 [2024-07-24 23:10:34.389733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.034 [2024-07-24 23:10:34.443958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:12.551  Copying: 5120/5120 [kB] (average 1000 MBps) 00:10:12.551 00:10:12.551 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:10:12.551 00:10:12.551 real 0m7.705s 00:10:12.551 user 0m5.775s 00:10:12.551 sys 0m3.435s 00:10:12.551 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.551 23:10:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:12.551 ************************************ 00:10:12.551 END TEST spdk_dd_bdev_to_bdev 00:10:12.551 ************************************ 00:10:12.551 23:10:34 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:12.551 23:10:34 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:10:12.551 23:10:34 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:12.551 23:10:34 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:12.551 23:10:34 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.551 23:10:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:12.551 ************************************ 00:10:12.551 START TEST spdk_dd_uring 00:10:12.551 ************************************ 00:10:12.551 23:10:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:10:12.551 * Looking for test storage... 
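Every spdk_dd invocation in the bdev_to_bdev run above attaches the same two PCIe namespaces; the offset_magic steps only vary --count and --seek. One of those steps (a 65 MiB read from Nvme0n1 written 16 MiB into Nvme1n1), with the controller-attach config exactly as printed in the traces, is roughly:

# Sketch of a dd_offset_magic step from test/dd/bdev_to_bdev.sh above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 \
    --count=65 --seek=16 --bs=1048576 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "Nvme0", "traddr": "0000:00:10.0", "trtype": "pcie" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "name": "Nvme1", "traddr": "0000:00:11.0", "trtype": "pcie" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

The test then reads 1 MiB back from the same offset (--skip=16) into dd.dump1 and checks it for the 'This Is Our Magic, find it' marker, as the magic_check comparisons in the trace show.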
00:10:12.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:12.551 ************************************ 00:10:12.551 START TEST dd_uring_copy 00:10:12.551 ************************************ 00:10:12.551 
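The dd_uring_copy test that follows layers an io_uring bdev (uring0) on a freshly hot-added 512M zram device, writes a 1 KiB magic string into magic.dump0 and pads it with zeros from /dev/zero, copies the file into uring0, reads it back into magic.dump1, and diffs the two dumps. Its setup and first copy boil down to the sketch below; the zram disksize path is the standard sysfs attribute and is an assumption here, since the trace only shows the hot_add read and an 'echo 512M'.

# Sketch of the zram-backed uring copy in test/dd/uring.sh below.
zram_id=$(cat /sys/class/zram-control/hot_add)      # the trace shows this returning 1
echo 512M > "/sys/block/zram${zram_id}/disksize"    # assumed target file for the traced 'echo 512M'

# magic.dump0 -> uring0, with uring0 created on /dev/zram1 (JSON as printed in the log):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/stdin <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" },
          "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON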
23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:10:12.551 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=699x9gk0ftef18wq61esoyg6tkajysaq2k5yofck00te99jo0s2q92kml5w40zj4adtgevh3ahrsokm8jubz7onaitkvu3ak2t662ab1bgkx0nb4eufl5njgy9tft4pgqyfu283upukukkotexmzqh87qzukgoneeiq61noi7suzqw17abpssisw2s625p0qjqvk42odxhg3ckt7z9bsjct5c51q6mvq0qxxy4jnqqmjlrobphtnxh4mtzdfb4gnotjwfz7h7ymae9qsc048cdyxt6celdv1o1j7p3h387znkwlzml0pay17ciy2f0l2bi93j8r1dup8gc7udtr8aaa7hv2d0mkzyuews135vdtqo3v1zd37cifcfcqv9od9jgaiwzgoiz3dbyanxw8jnjx47xis35m2sgcbtnh7aon2la5a35l9o1ukdhw4lzpx35jzs1dd4q100tohc4436lgbrw78qhbesmq6lt0dk13k0lvmo23v0hv9f1ww6eavdmsqhg7n7uqshtsljjpta9ff8n28aypwgldd752ai9xcb9js6vmftrgu4ktful0zcjk36susrb6dhyz4oz1prj6dr5vmgc7ucod9ioandlpybtk3owf4vbuho1sd9pdfu1aqzx94imcgqpms57y3s02jmm9cyn0606yujoiro5agtyrnrqfnriaixmsj0rk04f8hegjfmo03x448j6sgdl8fc3jjfmrr5a83leabe6x4qi4m18k814kbvir0y53l4xowuq70nm23wlw7flklp19h1v6tckwj8z57shce1lntaio90p16p1xon7l5ek1e6najdpjz3x8w1um4tjncw8xx4dfwagov1v3famx5fhobrif1wro8nvx6o79uwgjcyji9yhgjvqcy8a5r7ks6y8nhs2zw3e0h9hobww4nbjevowvjkx8rg236g0jpv79cu439kmd674cu7fqlp9tm91bbk7ro1xgblnxeu8nrqokc18v6j9ixmp8by3bylrkp 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 699x9gk0ftef18wq61esoyg6tkajysaq2k5yofck00te99jo0s2q92kml5w40zj4adtgevh3ahrsokm8jubz7onaitkvu3ak2t662ab1bgkx0nb4eufl5njgy9tft4pgqyfu283upukukkotexmzqh87qzukgoneeiq61noi7suzqw17abpssisw2s625p0qjqvk42odxhg3ckt7z9bsjct5c51q6mvq0qxxy4jnqqmjlrobphtnxh4mtzdfb4gnotjwfz7h7ymae9qsc048cdyxt6celdv1o1j7p3h387znkwlzml0pay17ciy2f0l2bi93j8r1dup8gc7udtr8aaa7hv2d0mkzyuews135vdtqo3v1zd37cifcfcqv9od9jgaiwzgoiz3dbyanxw8jnjx47xis35m2sgcbtnh7aon2la5a35l9o1ukdhw4lzpx35jzs1dd4q100tohc4436lgbrw78qhbesmq6lt0dk13k0lvmo23v0hv9f1ww6eavdmsqhg7n7uqshtsljjpta9ff8n28aypwgldd752ai9xcb9js6vmftrgu4ktful0zcjk36susrb6dhyz4oz1prj6dr5vmgc7ucod9ioandlpybtk3owf4vbuho1sd9pdfu1aqzx94imcgqpms57y3s02jmm9cyn0606yujoiro5agtyrnrqfnriaixmsj0rk04f8hegjfmo03x448j6sgdl8fc3jjfmrr5a83leabe6x4qi4m18k814kbvir0y53l4xowuq70nm23wlw7flklp19h1v6tckwj8z57shce1lntaio90p16p1xon7l5ek1e6najdpjz3x8w1um4tjncw8xx4dfwagov1v3famx5fhobrif1wro8nvx6o79uwgjcyji9yhgjvqcy8a5r7ks6y8nhs2zw3e0h9hobww4nbjevowvjkx8rg236g0jpv79cu439kmd674cu7fqlp9tm91bbk7ro1xgblnxeu8nrqokc18v6j9ixmp8by3bylrkp 00:10:12.810 23:10:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:10:12.810 [2024-07-24 23:10:35.108711] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:12.810 [2024-07-24 23:10:35.108810] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64250 ] 00:10:12.810 [2024-07-24 23:10:35.245541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.068 [2024-07-24 23:10:35.372451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.068 [2024-07-24 23:10:35.430144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:14.200  Copying: 511/511 [MB] (average 1402 MBps) 00:10:14.200 00:10:14.200 23:10:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:10:14.200 23:10:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:10:14.200 23:10:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:14.200 23:10:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:14.200 [2024-07-24 23:10:36.481455] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:14.200 [2024-07-24 23:10:36.481548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64266 ] 00:10:14.200 { 00:10:14.200 "subsystems": [ 00:10:14.200 { 00:10:14.200 "subsystem": "bdev", 00:10:14.200 "config": [ 00:10:14.200 { 00:10:14.200 "params": { 00:10:14.200 "block_size": 512, 00:10:14.200 "num_blocks": 1048576, 00:10:14.200 "name": "malloc0" 00:10:14.200 }, 00:10:14.200 "method": "bdev_malloc_create" 00:10:14.201 }, 00:10:14.201 { 00:10:14.201 "params": { 00:10:14.201 "filename": "/dev/zram1", 00:10:14.201 "name": "uring0" 00:10:14.201 }, 00:10:14.201 "method": "bdev_uring_create" 00:10:14.201 }, 00:10:14.201 { 00:10:14.201 "method": "bdev_wait_for_examine" 00:10:14.201 } 00:10:14.201 ] 00:10:14.201 } 00:10:14.201 ] 00:10:14.201 } 00:10:14.201 [2024-07-24 23:10:36.618640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.458 [2024-07-24 23:10:36.736850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.458 [2024-07-24 23:10:36.791029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:17.332  Copying: 216/512 [MB] (216 MBps) Copying: 434/512 [MB] (218 MBps) Copying: 512/512 [MB] (average 217 MBps) 00:10:17.332 00:10:17.332 23:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:10:17.332 23:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:10:17.332 23:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:17.332 23:10:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:17.332 [2024-07-24 23:10:39.793037] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:17.332 [2024-07-24 23:10:39.793124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64310 ] 00:10:17.332 { 00:10:17.332 "subsystems": [ 00:10:17.332 { 00:10:17.332 "subsystem": "bdev", 00:10:17.332 "config": [ 00:10:17.332 { 00:10:17.332 "params": { 00:10:17.332 "block_size": 512, 00:10:17.332 "num_blocks": 1048576, 00:10:17.332 "name": "malloc0" 00:10:17.332 }, 00:10:17.332 "method": "bdev_malloc_create" 00:10:17.332 }, 00:10:17.332 { 00:10:17.332 "params": { 00:10:17.332 "filename": "/dev/zram1", 00:10:17.332 "name": "uring0" 00:10:17.332 }, 00:10:17.332 "method": "bdev_uring_create" 00:10:17.332 }, 00:10:17.332 { 00:10:17.332 "method": "bdev_wait_for_examine" 00:10:17.332 } 00:10:17.332 ] 00:10:17.332 } 00:10:17.332 ] 00:10:17.332 } 00:10:17.590 [2024-07-24 23:10:39.927429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.590 [2024-07-24 23:10:40.039389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.848 [2024-07-24 23:10:40.092240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:21.464  Copying: 182/512 [MB] (182 MBps) Copying: 346/512 [MB] (163 MBps) Copying: 496/512 [MB] (149 MBps) Copying: 512/512 [MB] (average 165 MBps) 00:10:21.464 00:10:21.464 23:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:10:21.464 23:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 699x9gk0ftef18wq61esoyg6tkajysaq2k5yofck00te99jo0s2q92kml5w40zj4adtgevh3ahrsokm8jubz7onaitkvu3ak2t662ab1bgkx0nb4eufl5njgy9tft4pgqyfu283upukukkotexmzqh87qzukgoneeiq61noi7suzqw17abpssisw2s625p0qjqvk42odxhg3ckt7z9bsjct5c51q6mvq0qxxy4jnqqmjlrobphtnxh4mtzdfb4gnotjwfz7h7ymae9qsc048cdyxt6celdv1o1j7p3h387znkwlzml0pay17ciy2f0l2bi93j8r1dup8gc7udtr8aaa7hv2d0mkzyuews135vdtqo3v1zd37cifcfcqv9od9jgaiwzgoiz3dbyanxw8jnjx47xis35m2sgcbtnh7aon2la5a35l9o1ukdhw4lzpx35jzs1dd4q100tohc4436lgbrw78qhbesmq6lt0dk13k0lvmo23v0hv9f1ww6eavdmsqhg7n7uqshtsljjpta9ff8n28aypwgldd752ai9xcb9js6vmftrgu4ktful0zcjk36susrb6dhyz4oz1prj6dr5vmgc7ucod9ioandlpybtk3owf4vbuho1sd9pdfu1aqzx94imcgqpms57y3s02jmm9cyn0606yujoiro5agtyrnrqfnriaixmsj0rk04f8hegjfmo03x448j6sgdl8fc3jjfmrr5a83leabe6x4qi4m18k814kbvir0y53l4xowuq70nm23wlw7flklp19h1v6tckwj8z57shce1lntaio90p16p1xon7l5ek1e6najdpjz3x8w1um4tjncw8xx4dfwagov1v3famx5fhobrif1wro8nvx6o79uwgjcyji9yhgjvqcy8a5r7ks6y8nhs2zw3e0h9hobww4nbjevowvjkx8rg236g0jpv79cu439kmd674cu7fqlp9tm91bbk7ro1xgblnxeu8nrqokc18v6j9ixmp8by3bylrkp == 
\6\9\9\x\9\g\k\0\f\t\e\f\1\8\w\q\6\1\e\s\o\y\g\6\t\k\a\j\y\s\a\q\2\k\5\y\o\f\c\k\0\0\t\e\9\9\j\o\0\s\2\q\9\2\k\m\l\5\w\4\0\z\j\4\a\d\t\g\e\v\h\3\a\h\r\s\o\k\m\8\j\u\b\z\7\o\n\a\i\t\k\v\u\3\a\k\2\t\6\6\2\a\b\1\b\g\k\x\0\n\b\4\e\u\f\l\5\n\j\g\y\9\t\f\t\4\p\g\q\y\f\u\2\8\3\u\p\u\k\u\k\k\o\t\e\x\m\z\q\h\8\7\q\z\u\k\g\o\n\e\e\i\q\6\1\n\o\i\7\s\u\z\q\w\1\7\a\b\p\s\s\i\s\w\2\s\6\2\5\p\0\q\j\q\v\k\4\2\o\d\x\h\g\3\c\k\t\7\z\9\b\s\j\c\t\5\c\5\1\q\6\m\v\q\0\q\x\x\y\4\j\n\q\q\m\j\l\r\o\b\p\h\t\n\x\h\4\m\t\z\d\f\b\4\g\n\o\t\j\w\f\z\7\h\7\y\m\a\e\9\q\s\c\0\4\8\c\d\y\x\t\6\c\e\l\d\v\1\o\1\j\7\p\3\h\3\8\7\z\n\k\w\l\z\m\l\0\p\a\y\1\7\c\i\y\2\f\0\l\2\b\i\9\3\j\8\r\1\d\u\p\8\g\c\7\u\d\t\r\8\a\a\a\7\h\v\2\d\0\m\k\z\y\u\e\w\s\1\3\5\v\d\t\q\o\3\v\1\z\d\3\7\c\i\f\c\f\c\q\v\9\o\d\9\j\g\a\i\w\z\g\o\i\z\3\d\b\y\a\n\x\w\8\j\n\j\x\4\7\x\i\s\3\5\m\2\s\g\c\b\t\n\h\7\a\o\n\2\l\a\5\a\3\5\l\9\o\1\u\k\d\h\w\4\l\z\p\x\3\5\j\z\s\1\d\d\4\q\1\0\0\t\o\h\c\4\4\3\6\l\g\b\r\w\7\8\q\h\b\e\s\m\q\6\l\t\0\d\k\1\3\k\0\l\v\m\o\2\3\v\0\h\v\9\f\1\w\w\6\e\a\v\d\m\s\q\h\g\7\n\7\u\q\s\h\t\s\l\j\j\p\t\a\9\f\f\8\n\2\8\a\y\p\w\g\l\d\d\7\5\2\a\i\9\x\c\b\9\j\s\6\v\m\f\t\r\g\u\4\k\t\f\u\l\0\z\c\j\k\3\6\s\u\s\r\b\6\d\h\y\z\4\o\z\1\p\r\j\6\d\r\5\v\m\g\c\7\u\c\o\d\9\i\o\a\n\d\l\p\y\b\t\k\3\o\w\f\4\v\b\u\h\o\1\s\d\9\p\d\f\u\1\a\q\z\x\9\4\i\m\c\g\q\p\m\s\5\7\y\3\s\0\2\j\m\m\9\c\y\n\0\6\0\6\y\u\j\o\i\r\o\5\a\g\t\y\r\n\r\q\f\n\r\i\a\i\x\m\s\j\0\r\k\0\4\f\8\h\e\g\j\f\m\o\0\3\x\4\4\8\j\6\s\g\d\l\8\f\c\3\j\j\f\m\r\r\5\a\8\3\l\e\a\b\e\6\x\4\q\i\4\m\1\8\k\8\1\4\k\b\v\i\r\0\y\5\3\l\4\x\o\w\u\q\7\0\n\m\2\3\w\l\w\7\f\l\k\l\p\1\9\h\1\v\6\t\c\k\w\j\8\z\5\7\s\h\c\e\1\l\n\t\a\i\o\9\0\p\1\6\p\1\x\o\n\7\l\5\e\k\1\e\6\n\a\j\d\p\j\z\3\x\8\w\1\u\m\4\t\j\n\c\w\8\x\x\4\d\f\w\a\g\o\v\1\v\3\f\a\m\x\5\f\h\o\b\r\i\f\1\w\r\o\8\n\v\x\6\o\7\9\u\w\g\j\c\y\j\i\9\y\h\g\j\v\q\c\y\8\a\5\r\7\k\s\6\y\8\n\h\s\2\z\w\3\e\0\h\9\h\o\b\w\w\4\n\b\j\e\v\o\w\v\j\k\x\8\r\g\2\3\6\g\0\j\p\v\7\9\c\u\4\3\9\k\m\d\6\7\4\c\u\7\f\q\l\p\9\t\m\9\1\b\b\k\7\r\o\1\x\g\b\l\n\x\e\u\8\n\r\q\o\k\c\1\8\v\6\j\9\i\x\m\p\8\b\y\3\b\y\l\r\k\p ]] 00:10:21.464 23:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:10:21.464 23:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 699x9gk0ftef18wq61esoyg6tkajysaq2k5yofck00te99jo0s2q92kml5w40zj4adtgevh3ahrsokm8jubz7onaitkvu3ak2t662ab1bgkx0nb4eufl5njgy9tft4pgqyfu283upukukkotexmzqh87qzukgoneeiq61noi7suzqw17abpssisw2s625p0qjqvk42odxhg3ckt7z9bsjct5c51q6mvq0qxxy4jnqqmjlrobphtnxh4mtzdfb4gnotjwfz7h7ymae9qsc048cdyxt6celdv1o1j7p3h387znkwlzml0pay17ciy2f0l2bi93j8r1dup8gc7udtr8aaa7hv2d0mkzyuews135vdtqo3v1zd37cifcfcqv9od9jgaiwzgoiz3dbyanxw8jnjx47xis35m2sgcbtnh7aon2la5a35l9o1ukdhw4lzpx35jzs1dd4q100tohc4436lgbrw78qhbesmq6lt0dk13k0lvmo23v0hv9f1ww6eavdmsqhg7n7uqshtsljjpta9ff8n28aypwgldd752ai9xcb9js6vmftrgu4ktful0zcjk36susrb6dhyz4oz1prj6dr5vmgc7ucod9ioandlpybtk3owf4vbuho1sd9pdfu1aqzx94imcgqpms57y3s02jmm9cyn0606yujoiro5agtyrnrqfnriaixmsj0rk04f8hegjfmo03x448j6sgdl8fc3jjfmrr5a83leabe6x4qi4m18k814kbvir0y53l4xowuq70nm23wlw7flklp19h1v6tckwj8z57shce1lntaio90p16p1xon7l5ek1e6najdpjz3x8w1um4tjncw8xx4dfwagov1v3famx5fhobrif1wro8nvx6o79uwgjcyji9yhgjvqcy8a5r7ks6y8nhs2zw3e0h9hobww4nbjevowvjkx8rg236g0jpv79cu439kmd674cu7fqlp9tm91bbk7ro1xgblnxeu8nrqokc18v6j9ixmp8by3bylrkp == 
\6\9\9\x\9\g\k\0\f\t\e\f\1\8\w\q\6\1\e\s\o\y\g\6\t\k\a\j\y\s\a\q\2\k\5\y\o\f\c\k\0\0\t\e\9\9\j\o\0\s\2\q\9\2\k\m\l\5\w\4\0\z\j\4\a\d\t\g\e\v\h\3\a\h\r\s\o\k\m\8\j\u\b\z\7\o\n\a\i\t\k\v\u\3\a\k\2\t\6\6\2\a\b\1\b\g\k\x\0\n\b\4\e\u\f\l\5\n\j\g\y\9\t\f\t\4\p\g\q\y\f\u\2\8\3\u\p\u\k\u\k\k\o\t\e\x\m\z\q\h\8\7\q\z\u\k\g\o\n\e\e\i\q\6\1\n\o\i\7\s\u\z\q\w\1\7\a\b\p\s\s\i\s\w\2\s\6\2\5\p\0\q\j\q\v\k\4\2\o\d\x\h\g\3\c\k\t\7\z\9\b\s\j\c\t\5\c\5\1\q\6\m\v\q\0\q\x\x\y\4\j\n\q\q\m\j\l\r\o\b\p\h\t\n\x\h\4\m\t\z\d\f\b\4\g\n\o\t\j\w\f\z\7\h\7\y\m\a\e\9\q\s\c\0\4\8\c\d\y\x\t\6\c\e\l\d\v\1\o\1\j\7\p\3\h\3\8\7\z\n\k\w\l\z\m\l\0\p\a\y\1\7\c\i\y\2\f\0\l\2\b\i\9\3\j\8\r\1\d\u\p\8\g\c\7\u\d\t\r\8\a\a\a\7\h\v\2\d\0\m\k\z\y\u\e\w\s\1\3\5\v\d\t\q\o\3\v\1\z\d\3\7\c\i\f\c\f\c\q\v\9\o\d\9\j\g\a\i\w\z\g\o\i\z\3\d\b\y\a\n\x\w\8\j\n\j\x\4\7\x\i\s\3\5\m\2\s\g\c\b\t\n\h\7\a\o\n\2\l\a\5\a\3\5\l\9\o\1\u\k\d\h\w\4\l\z\p\x\3\5\j\z\s\1\d\d\4\q\1\0\0\t\o\h\c\4\4\3\6\l\g\b\r\w\7\8\q\h\b\e\s\m\q\6\l\t\0\d\k\1\3\k\0\l\v\m\o\2\3\v\0\h\v\9\f\1\w\w\6\e\a\v\d\m\s\q\h\g\7\n\7\u\q\s\h\t\s\l\j\j\p\t\a\9\f\f\8\n\2\8\a\y\p\w\g\l\d\d\7\5\2\a\i\9\x\c\b\9\j\s\6\v\m\f\t\r\g\u\4\k\t\f\u\l\0\z\c\j\k\3\6\s\u\s\r\b\6\d\h\y\z\4\o\z\1\p\r\j\6\d\r\5\v\m\g\c\7\u\c\o\d\9\i\o\a\n\d\l\p\y\b\t\k\3\o\w\f\4\v\b\u\h\o\1\s\d\9\p\d\f\u\1\a\q\z\x\9\4\i\m\c\g\q\p\m\s\5\7\y\3\s\0\2\j\m\m\9\c\y\n\0\6\0\6\y\u\j\o\i\r\o\5\a\g\t\y\r\n\r\q\f\n\r\i\a\i\x\m\s\j\0\r\k\0\4\f\8\h\e\g\j\f\m\o\0\3\x\4\4\8\j\6\s\g\d\l\8\f\c\3\j\j\f\m\r\r\5\a\8\3\l\e\a\b\e\6\x\4\q\i\4\m\1\8\k\8\1\4\k\b\v\i\r\0\y\5\3\l\4\x\o\w\u\q\7\0\n\m\2\3\w\l\w\7\f\l\k\l\p\1\9\h\1\v\6\t\c\k\w\j\8\z\5\7\s\h\c\e\1\l\n\t\a\i\o\9\0\p\1\6\p\1\x\o\n\7\l\5\e\k\1\e\6\n\a\j\d\p\j\z\3\x\8\w\1\u\m\4\t\j\n\c\w\8\x\x\4\d\f\w\a\g\o\v\1\v\3\f\a\m\x\5\f\h\o\b\r\i\f\1\w\r\o\8\n\v\x\6\o\7\9\u\w\g\j\c\y\j\i\9\y\h\g\j\v\q\c\y\8\a\5\r\7\k\s\6\y\8\n\h\s\2\z\w\3\e\0\h\9\h\o\b\w\w\4\n\b\j\e\v\o\w\v\j\k\x\8\r\g\2\3\6\g\0\j\p\v\7\9\c\u\4\3\9\k\m\d\6\7\4\c\u\7\f\q\l\p\9\t\m\9\1\b\b\k\7\r\o\1\x\g\b\l\n\x\e\u\8\n\r\q\o\k\c\1\8\v\6\j\9\i\x\m\p\8\b\y\3\b\y\l\r\k\p ]] 00:10:21.464 23:10:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:22.031 23:10:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:10:22.031 23:10:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:10:22.031 23:10:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:22.031 23:10:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:22.031 [2024-07-24 23:10:44.263586] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:22.031 [2024-07-24 23:10:44.263675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64392 ] 00:10:22.031 { 00:10:22.031 "subsystems": [ 00:10:22.031 { 00:10:22.031 "subsystem": "bdev", 00:10:22.031 "config": [ 00:10:22.031 { 00:10:22.031 "params": { 00:10:22.031 "block_size": 512, 00:10:22.031 "num_blocks": 1048576, 00:10:22.031 "name": "malloc0" 00:10:22.031 }, 00:10:22.031 "method": "bdev_malloc_create" 00:10:22.031 }, 00:10:22.031 { 00:10:22.031 "params": { 00:10:22.031 "filename": "/dev/zram1", 00:10:22.031 "name": "uring0" 00:10:22.031 }, 00:10:22.031 "method": "bdev_uring_create" 00:10:22.031 }, 00:10:22.031 { 00:10:22.031 "method": "bdev_wait_for_examine" 00:10:22.031 } 00:10:22.031 ] 00:10:22.031 } 00:10:22.031 ] 00:10:22.031 } 00:10:22.031 [2024-07-24 23:10:44.400960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.031 [2024-07-24 23:10:44.514251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.289 [2024-07-24 23:10:44.571007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:26.355  Copying: 142/512 [MB] (142 MBps) Copying: 285/512 [MB] (143 MBps) Copying: 427/512 [MB] (142 MBps) Copying: 512/512 [MB] (average 142 MBps) 00:10:26.355 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:26.355 23:10:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:26.355 [2024-07-24 23:10:48.824290] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:26.355 [2024-07-24 23:10:48.824835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64453 ] 00:10:26.658 { 00:10:26.658 "subsystems": [ 00:10:26.658 { 00:10:26.658 "subsystem": "bdev", 00:10:26.658 "config": [ 00:10:26.658 { 00:10:26.658 "params": { 00:10:26.658 "block_size": 512, 00:10:26.658 "num_blocks": 1048576, 00:10:26.658 "name": "malloc0" 00:10:26.658 }, 00:10:26.658 "method": "bdev_malloc_create" 00:10:26.658 }, 00:10:26.658 { 00:10:26.658 "params": { 00:10:26.658 "filename": "/dev/zram1", 00:10:26.658 "name": "uring0" 00:10:26.658 }, 00:10:26.658 "method": "bdev_uring_create" 00:10:26.658 }, 00:10:26.658 { 00:10:26.658 "params": { 00:10:26.658 "name": "uring0" 00:10:26.658 }, 00:10:26.658 "method": "bdev_uring_delete" 00:10:26.658 }, 00:10:26.658 { 00:10:26.658 "method": "bdev_wait_for_examine" 00:10:26.658 } 00:10:26.658 ] 00:10:26.658 } 00:10:26.658 ] 00:10:26.658 } 00:10:26.658 [2024-07-24 23:10:48.959425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.658 [2024-07-24 23:10:49.076637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.658 [2024-07-24 23:10:49.133758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:27.484  Copying: 0/0 [B] (average 0 Bps) 00:10:27.484 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:27.484 23:10:49 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:27.484 [2024-07-24 23:10:49.809891] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:27.485 [2024-07-24 23:10:49.809971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64482 ] 00:10:27.485 { 00:10:27.485 "subsystems": [ 00:10:27.485 { 00:10:27.485 "subsystem": "bdev", 00:10:27.485 "config": [ 00:10:27.485 { 00:10:27.485 "params": { 00:10:27.485 "block_size": 512, 00:10:27.485 "num_blocks": 1048576, 00:10:27.485 "name": "malloc0" 00:10:27.485 }, 00:10:27.485 "method": "bdev_malloc_create" 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "params": { 00:10:27.485 "filename": "/dev/zram1", 00:10:27.485 "name": "uring0" 00:10:27.485 }, 00:10:27.485 "method": "bdev_uring_create" 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "params": { 00:10:27.485 "name": "uring0" 00:10:27.485 }, 00:10:27.485 "method": "bdev_uring_delete" 00:10:27.485 }, 00:10:27.485 { 00:10:27.485 "method": "bdev_wait_for_examine" 00:10:27.485 } 00:10:27.485 ] 00:10:27.485 } 00:10:27.485 ] 00:10:27.485 } 00:10:27.485 [2024-07-24 23:10:49.946192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.743 [2024-07-24 23:10:50.061632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.743 [2024-07-24 23:10:50.115768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:28.001 [2024-07-24 23:10:50.318984] bdev.c:8154:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:28.001 [2024-07-24 23:10:50.319041] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:28.001 [2024-07-24 23:10:50.319053] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:10:28.001 [2024-07-24 23:10:50.319064] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:28.260 [2024-07-24 23:10:50.630691] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:10:28.260 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:10:28.518 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:28.518 00:10:28.518 real 0m15.956s 00:10:28.518 ************************************ 00:10:28.518 END TEST dd_uring_copy 00:10:28.518 ************************************ 00:10:28.518 user 0m10.815s 00:10:28.518 sys 0m12.892s 00:10:28.518 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.518 23:10:50 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:28.776 23:10:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:10:28.776 00:10:28.776 real 0m16.087s 00:10:28.776 user 0m10.870s 00:10:28.776 sys 0m12.968s 00:10:28.776 23:10:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.776 23:10:51 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:10:28.776 ************************************ 00:10:28.776 END TEST spdk_dd_uring 00:10:28.776 ************************************ 00:10:28.776 23:10:51 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:28.776 23:10:51 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:28.776 23:10:51 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:28.776 23:10:51 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.776 23:10:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:28.776 ************************************ 00:10:28.776 START TEST spdk_dd_sparse 00:10:28.776 ************************************ 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:28.776 * Looking for test storage... 00:10:28.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
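For reference, the sparse-file preparation logged above condenses to the short sketch below. It is a minimal sketch, not part of the test run: the commands and byte counts are exactly those shown in the log, and the size arithmetic in the comments is derived from them.

# Minimal sketch of the prepare step shown above (same commands, comments added).
# dd_sparse_aio_disk backs the aio bdev; file_zero1 is the sparse copy source.
truncate dd_sparse_aio_disk --size 104857600        # 100 MiB backing file for bdev_aio_create
dd if=/dev/zero of=file_zero1 bs=4M count=1         # data extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data extent at 16 MiB, hole before it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data extent at 32 MiB, hole before it
# Apparent size: 9 * 4 MiB = 37748736 bytes; allocated: 3 * 4 MiB = 12 MiB,
# i.e. 24576 blocks of 512 bytes -- the two values the stat checks compare below.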
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.776 23:10:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:28.777 1+0 records in 00:10:28.777 1+0 records out 00:10:28.777 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00719908 s, 583 MB/s 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:28.777 1+0 records in 00:10:28.777 1+0 records out 00:10:28.777 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.005891 s, 712 MB/s 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:28.777 1+0 records in 00:10:28.777 1+0 records out 00:10:28.777 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00740632 s, 566 MB/s 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 ************************************ 00:10:28.777 START TEST dd_sparse_file_to_file 00:10:28.777 ************************************ 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 
00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:28.777 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:28.777 { 00:10:28.777 "subsystems": [ 00:10:28.777 { 00:10:28.777 "subsystem": "bdev", 00:10:28.777 "config": [ 00:10:28.777 { 00:10:28.777 "params": { 00:10:28.777 "block_size": 4096, 00:10:28.777 "filename": "dd_sparse_aio_disk", 00:10:28.777 "name": "dd_aio" 00:10:28.777 }, 00:10:28.777 "method": "bdev_aio_create" 00:10:28.777 }, 00:10:28.777 { 00:10:28.777 "params": { 00:10:28.777 "lvs_name": "dd_lvstore", 00:10:28.777 "bdev_name": "dd_aio" 00:10:28.777 }, 00:10:28.777 "method": "bdev_lvol_create_lvstore" 00:10:28.777 }, 00:10:28.777 { 00:10:28.777 "method": "bdev_wait_for_examine" 00:10:28.777 } 00:10:28.777 ] 00:10:28.777 } 00:10:28.777 ] 00:10:28.777 } 00:10:28.777 [2024-07-24 23:10:51.259510] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:28.777 [2024-07-24 23:10:51.259613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64574 ] 00:10:29.036 [2024-07-24 23:10:51.400330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.293 [2024-07-24 23:10:51.530753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.293 [2024-07-24 23:10:51.588498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:29.553  Copying: 12/36 [MB] (average 1000 MBps) 00:10:29.553 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:29.553 00:10:29.553 real 0m0.766s 00:10:29.553 user 0m0.493s 00:10:29.553 sys 0m0.360s 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.553 ************************************ 00:10:29.553 END TEST dd_sparse_file_to_file 00:10:29.553 ************************************ 00:10:29.553 23:10:51 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:29.553 ************************************ 00:10:29.553 START TEST dd_sparse_file_to_bdev 00:10:29.553 ************************************ 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # 
method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:29.553 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:29.811 [2024-07-24 23:10:52.074870] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:29.811 [2024-07-24 23:10:52.074986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64616 ] 00:10:29.811 { 00:10:29.811 "subsystems": [ 00:10:29.811 { 00:10:29.811 "subsystem": "bdev", 00:10:29.811 "config": [ 00:10:29.811 { 00:10:29.811 "params": { 00:10:29.811 "block_size": 4096, 00:10:29.811 "filename": "dd_sparse_aio_disk", 00:10:29.811 "name": "dd_aio" 00:10:29.811 }, 00:10:29.811 "method": "bdev_aio_create" 00:10:29.811 }, 00:10:29.811 { 00:10:29.811 "params": { 00:10:29.811 "lvs_name": "dd_lvstore", 00:10:29.811 "lvol_name": "dd_lvol", 00:10:29.811 "size_in_mib": 36, 00:10:29.811 "thin_provision": true 00:10:29.811 }, 00:10:29.811 "method": "bdev_lvol_create" 00:10:29.811 }, 00:10:29.811 { 00:10:29.811 "method": "bdev_wait_for_examine" 00:10:29.811 } 00:10:29.811 ] 00:10:29.811 } 00:10:29.811 ] 00:10:29.811 } 00:10:29.811 [2024-07-24 23:10:52.212270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.069 [2024-07-24 23:10:52.340171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.069 [2024-07-24 23:10:52.395936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:30.327  Copying: 12/36 [MB] (average 480 MBps) 00:10:30.327 00:10:30.327 00:10:30.327 real 0m0.725s 00:10:30.327 user 0m0.482s 00:10:30.327 sys 0m0.355s 00:10:30.327 ************************************ 00:10:30.327 END TEST dd_sparse_file_to_bdev 00:10:30.327 ************************************ 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.327 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:30.328 ************************************ 00:10:30.328 START TEST dd_sparse_bdev_to_file 00:10:30.328 ************************************ 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- 
common/autotest_common.sh@1123 -- # bdev_to_file 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:30.328 23:10:52 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:30.587 [2024-07-24 23:10:52.846786] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:30.587 [2024-07-24 23:10:52.846891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64653 ] 00:10:30.587 { 00:10:30.587 "subsystems": [ 00:10:30.587 { 00:10:30.587 "subsystem": "bdev", 00:10:30.587 "config": [ 00:10:30.587 { 00:10:30.587 "params": { 00:10:30.587 "block_size": 4096, 00:10:30.587 "filename": "dd_sparse_aio_disk", 00:10:30.587 "name": "dd_aio" 00:10:30.587 }, 00:10:30.587 "method": "bdev_aio_create" 00:10:30.587 }, 00:10:30.587 { 00:10:30.587 "method": "bdev_wait_for_examine" 00:10:30.587 } 00:10:30.587 ] 00:10:30.587 } 00:10:30.587 ] 00:10:30.587 } 00:10:30.587 [2024-07-24 23:10:52.984454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.845 [2024-07-24 23:10:53.119155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.845 [2024-07-24 23:10:53.180590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:31.109  Copying: 12/36 [MB] (average 1000 MBps) 00:10:31.109 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:31.109 23:10:53 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:31.109 00:10:31.109 real 0m0.731s 00:10:31.109 user 0m0.477s 00:10:31.109 sys 0m0.345s 00:10:31.109 ************************************ 00:10:31.109 END TEST dd_sparse_bdev_to_file 00:10:31.109 ************************************ 00:10:31.109 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:31.110 00:10:31.110 real 0m2.522s 00:10:31.110 user 0m1.559s 00:10:31.110 sys 0m1.242s 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.110 ************************************ 00:10:31.110 END TEST spdk_dd_sparse 00:10:31.110 ************************************ 00:10:31.110 23:10:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:31.367 23:10:53 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:31.367 23:10:53 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:31.367 23:10:53 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.367 23:10:53 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.367 23:10:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:31.367 ************************************ 00:10:31.367 START TEST spdk_dd_negative 00:10:31.367 ************************************ 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:31.367 * Looking for test storage... 
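Each of the sparse tests above finishes with the same pair of stat comparisons. As a minimal sketch (the helper name is made up and not part of the suite; the filenames and values are taken from the log), the check amounts to:

# Hypothetical condensation of the verification used by the sparse tests:
# a hole-preserving copy keeps both the apparent size and the allocated blocks.
verify_sparse_copy() {                                                   # name is an assumption
    local src=$1 dst=$2
    [[ $(stat --printf=%s "$src") == $(stat --printf=%s "$dst") ]] &&    # apparent size, 37748736 above
    [[ $(stat --printf=%b "$src") == $(stat --printf=%b "$dst") ]]       # 512-byte blocks, 24576 above
}
verify_sparse_copy file_zero2 file_zero3                                 # as in dd_sparse_bdev_to_file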
00:10:31.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.367 ************************************ 00:10:31.367 START TEST dd_invalid_arguments 00:10:31.367 ************************************ 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.367 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:31.367 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:31.367 00:10:31.367 CPU options: 00:10:31.367 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:31.367 (like [0,1,10]) 00:10:31.367 --lcores lcore to CPU mapping list. The list is in the format: 00:10:31.367 [<,lcores[@CPUs]>...] 00:10:31.367 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:31.367 Within the group, '-' is used for range separator, 00:10:31.367 ',' is used for single number separator. 00:10:31.367 '( )' can be omitted for single element group, 00:10:31.367 '@' can be omitted if cpus and lcores have the same value 00:10:31.367 --disable-cpumask-locks Disable CPU core lock files. 
00:10:31.367 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:31.367 pollers in the app support interrupt mode) 00:10:31.367 -p, --main-core main (primary) core for DPDK 00:10:31.367 00:10:31.367 Configuration options: 00:10:31.367 -c, --config, --json JSON config file 00:10:31.367 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:31.367 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:10:31.367 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:31.367 --rpcs-allowed comma-separated list of permitted RPCS 00:10:31.367 --json-ignore-init-errors don't exit on invalid config entry 00:10:31.367 00:10:31.367 Memory options: 00:10:31.367 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:31.367 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:31.367 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:31.367 -R, --huge-unlink unlink huge files after initialization 00:10:31.367 -n, --mem-channels number of memory channels used for DPDK 00:10:31.367 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:31.367 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:31.367 --no-huge run without using hugepages 00:10:31.367 -i, --shm-id shared memory ID (optional) 00:10:31.367 -g, --single-file-segments force creating just one hugetlbfs file 00:10:31.367 00:10:31.367 PCI options: 00:10:31.367 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:31.367 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:31.367 -u, --no-pci disable PCI access 00:10:31.367 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:31.367 00:10:31.367 Log options: 00:10:31.367 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:31.367 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:31.367 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:31.367 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:31.367 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:10:31.367 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:10:31.367 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:10:31.367 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:10:31.367 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:10:31.368 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:10:31.368 virtio_vfio_user, vmd) 00:10:31.368 --silence-noticelog disable notice level logging to stderr 00:10:31.368 00:10:31.368 Trace options: 00:10:31.368 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:31.368 setting 0 to disable trace (default 32768) 00:10:31.368 Tracepoints vary in size and can use more than one trace entry. 00:10:31.368 -e, --tpoint-group [:] 00:10:31.368 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:10:31.368 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:10:31.368 [2024-07-24 23:10:53.832025] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:10:31.626 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:10:31.626 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:10:31.626 a tracepoint group. 
First tpoint inside a group can be enabled by 00:10:31.626 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:10:31.626 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:10:31.626 in /include/spdk_internal/trace_defs.h 00:10:31.626 00:10:31.626 Other options: 00:10:31.626 -h, --help show this usage 00:10:31.626 -v, --version print SPDK version 00:10:31.626 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:10:31.626 --env-context Opaque context for use of the env implementation 00:10:31.626 00:10:31.626 Application specific: 00:10:31.626 [--------- DD Options ---------] 00:10:31.626 --if Input file. Must specify either --if or --ib. 00:10:31.626 --ib Input bdev. Must specifier either --if or --ib 00:10:31.626 --of Output file. Must specify either --of or --ob. 00:10:31.626 --ob Output bdev. Must specify either --of or --ob. 00:10:31.626 --iflag Input file flags. 00:10:31.626 --oflag Output file flags. 00:10:31.626 --bs I/O unit size (default: 4096) 00:10:31.626 --qd Queue depth (default: 2) 00:10:31.626 --count I/O unit count. The number of I/O units to copy. (default: all) 00:10:31.626 --skip Skip this many I/O units at start of input. (default: 0) 00:10:31.626 --seek Skip this many I/O units at start of output. (default: 0) 00:10:31.626 --aio Force usage of AIO. (by default io_uring is used if available) 00:10:31.626 --sparse Enable hole skipping in input target 00:10:31.626 Available iflag and oflag values: 00:10:31.626 append - append mode 00:10:31.626 direct - use direct I/O for data 00:10:31.626 directory - fail unless a directory 00:10:31.626 dsync - use synchronized I/O for data 00:10:31.626 noatime - do not update access time 00:10:31.626 noctty - do not assign controlling terminal from file 00:10:31.626 nofollow - do not follow symlinks 00:10:31.626 nonblock - use non-blocking I/O 00:10:31.626 sync - use synchronized I/O for data and metadata 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.626 00:10:31.626 real 0m0.105s 00:10:31.626 user 0m0.069s 00:10:31.626 sys 0m0.034s 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.626 ************************************ 00:10:31.626 END TEST dd_invalid_arguments 00:10:31.626 ************************************ 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.626 ************************************ 00:10:31.626 START TEST dd_double_input 00:10:31.626 ************************************ 00:10:31.626 23:10:53 
spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:31.626 [2024-07-24 23:10:53.969083] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
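The dd_double_input failure just logged is the intended outcome: spdk_dd must refuse two input targets. A minimal standalone sketch of that check follows; the binary path, scratch file and error text are as in the log, while redirecting stderr to err.log and grepping the exact message is only one assumed way to assert it (the suite itself relies on its NOT helper and exit-status handling).

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
if "$SPDK_DD" --if="$DUMP0" --ib= --ob= 2>err.log; then
    echo "unexpected success: --if and --ib were both accepted" >&2
    exit 1
fi
grep -q 'You may specify either --if or --ib, but not both.' err.log   # message from spdk_dd.c:1487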
00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.626 00:10:31.626 real 0m0.087s 00:10:31.626 user 0m0.059s 00:10:31.626 sys 0m0.026s 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.626 ************************************ 00:10:31.626 END TEST dd_double_input 00:10:31.626 ************************************ 00:10:31.626 23:10:53 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.626 ************************************ 00:10:31.626 START TEST dd_double_output 00:10:31.626 ************************************ 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.626 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:31.626 [2024-07-24 23:10:54.098559] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.885 00:10:31.885 real 0m0.073s 00:10:31.885 user 0m0.046s 00:10:31.885 sys 0m0.026s 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:31.885 ************************************ 00:10:31.885 END TEST dd_double_output 00:10:31.885 ************************************ 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.885 ************************************ 00:10:31.885 START TEST dd_no_input 00:10:31.885 ************************************ 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.885 23:10:54 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:31.885 [2024-07-24 23:10:54.220721] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.885 00:10:31.885 real 0m0.073s 00:10:31.885 user 0m0.043s 00:10:31.885 sys 0m0.029s 00:10:31.885 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.885 ************************************ 00:10:31.886 END TEST dd_no_input 00:10:31.886 ************************************ 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.886 ************************************ 00:10:31.886 START TEST dd_no_output 00:10:31.886 ************************************ 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.886 23:10:54 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:31.886 [2024-07-24 23:10:54.344235] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:31.886 00:10:31.886 real 0m0.075s 00:10:31.886 user 0m0.048s 00:10:31.886 sys 0m0.026s 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.886 23:10:54 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:31.886 ************************************ 00:10:31.886 END TEST dd_no_output 00:10:31.886 ************************************ 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:32.145 ************************************ 00:10:32.145 START TEST dd_wrong_blocksize 00:10:32.145 ************************************ 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:32.145 [2024-07-24 23:10:54.460800] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:32.145 00:10:32.145 real 0m0.063s 00:10:32.145 user 0m0.032s 00:10:32.145 sys 0m0.031s 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:32.145 ************************************ 00:10:32.145 END TEST dd_wrong_blocksize 00:10:32.145 ************************************ 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:32.145 ************************************ 00:10:32.145 START TEST dd_smaller_blocksize 00:10:32.145 ************************************ 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:32.145 23:10:54 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:32.145 [2024-07-24 23:10:54.584590] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:32.145 [2024-07-24 23:10:54.584697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64873 ] 00:10:32.429 [2024-07-24 23:10:54.725040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.429 [2024-07-24 23:10:54.853352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.710 [2024-07-24 23:10:54.910630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:32.969 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:32.969 [2024-07-24 23:10:55.240772] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:32.969 [2024-07-24 23:10:55.240828] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:32.969 [2024-07-24 23:10:55.361677] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.229 00:10:33.229 real 0m0.939s 00:10:33.229 user 0m0.429s 00:10:33.229 sys 0m0.403s 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.229 ************************************ 00:10:33.229 END TEST dd_smaller_blocksize 00:10:33.229 ************************************ 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.229 ************************************ 00:10:33.229 START TEST dd_invalid_count 00:10:33.229 ************************************ 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:33.229 [2024-07-24 23:10:55.579504] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.229 00:10:33.229 real 0m0.076s 00:10:33.229 user 0m0.047s 00:10:33.229 sys 0m0.028s 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:33.229 ************************************ 00:10:33.229 END TEST dd_invalid_count 
00:10:33.229 ************************************ 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.229 ************************************ 00:10:33.229 START TEST dd_invalid_oflag 00:10:33.229 ************************************ 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.229 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:33.229 [2024-07-24 23:10:55.706174] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.488 00:10:33.488 real 0m0.075s 00:10:33.488 user 0m0.048s 00:10:33.488 sys 0m0.026s 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:33.488 
************************************ 00:10:33.488 END TEST dd_invalid_oflag 00:10:33.488 ************************************ 00:10:33.488 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.489 ************************************ 00:10:33.489 START TEST dd_invalid_iflag 00:10:33.489 ************************************ 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:33.489 [2024-07-24 23:10:55.833427] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.489 00:10:33.489 real 0m0.077s 00:10:33.489 user 0m0.054s 00:10:33.489 sys 0m0.023s 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.489 ************************************ 00:10:33.489 23:10:55 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:33.489 END TEST dd_invalid_iflag 00:10:33.489 ************************************ 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.489 ************************************ 00:10:33.489 START TEST dd_unknown_flag 00:10:33.489 ************************************ 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.489 23:10:55 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:33.489 [2024-07-24 23:10:55.959396] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:33.489 [2024-07-24 23:10:55.959485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64965 ] 00:10:33.748 [2024-07-24 23:10:56.101011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.748 [2024-07-24 23:10:56.224917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.006 [2024-07-24 23:10:56.282876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:34.006 [2024-07-24 23:10:56.319335] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:34.006 [2024-07-24 23:10:56.319400] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.006 [2024-07-24 23:10:56.319486] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:34.006 [2024-07-24 23:10:56.319502] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.006 [2024-07-24 23:10:56.319794] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:34.006 [2024-07-24 23:10:56.319833] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.006 [2024-07-24 23:10:56.319889] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:34.006 [2024-07-24 23:10:56.319903] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:34.006 [2024-07-24 23:10:56.435470] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:10:34.265 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:34.265 00:10:34.265 real 0m0.638s 00:10:34.265 user 0m0.381s 00:10:34.265 sys 0m0.158s 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:34.266 ************************************ 00:10:34.266 END TEST dd_unknown_flag 00:10:34.266 ************************************ 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:34.266 ************************************ 00:10:34.266 START TEST dd_invalid_json 00:10:34.266 ************************************ 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:10:34.266 23:10:56 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:34.266 23:10:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:34.266 [2024-07-24 23:10:56.655159] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:10:34.266 [2024-07-24 23:10:56.655263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64999 ] 00:10:34.525 [2024-07-24 23:10:56.794386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.525 [2024-07-24 23:10:56.927281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.525 [2024-07-24 23:10:56.927367] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:34.525 [2024-07-24 23:10:56.927389] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:34.525 [2024-07-24 23:10:56.927401] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:34.525 [2024-07-24 23:10:56.927447] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:34.783 00:10:34.783 real 0m0.443s 00:10:34.783 user 0m0.265s 00:10:34.783 sys 0m0.076s 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.783 ************************************ 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:34.783 END TEST dd_invalid_json 00:10:34.783 ************************************ 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:10:34.783 00:10:34.783 real 0m3.436s 00:10:34.783 user 0m1.769s 00:10:34.783 sys 0m1.309s 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.783 23:10:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:34.783 ************************************ 00:10:34.783 END TEST spdk_dd_negative 00:10:34.783 ************************************ 00:10:34.783 23:10:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:10:34.783 ************************************ 00:10:34.783 END TEST spdk_dd 00:10:34.783 ************************************ 00:10:34.783 00:10:34.783 real 1m21.788s 00:10:34.783 user 0m54.148s 00:10:34.783 sys 0m34.156s 00:10:34.783 23:10:57 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.783 23:10:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:34.783 23:10:57 -- common/autotest_common.sh@1142 -- # return 0 00:10:34.783 23:10:57 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:10:34.783 23:10:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:34.783 23:10:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:34.783 23:10:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.783 23:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:34.783 23:10:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:10:34.783 23:10:57 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:10:34.784 23:10:57 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:10:34.784 23:10:57 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:10:34.784 23:10:57 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:10:34.784 23:10:57 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:10:34.784 23:10:57 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:34.784 23:10:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:34.784 23:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.784 23:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:34.784 ************************************ 00:10:34.784 START TEST nvmf_tcp 00:10:34.784 ************************************ 00:10:34.784 23:10:57 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:35.042 * Looking for test storage... 00:10:35.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.042 23:10:57 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.043 23:10:57 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.043 23:10:57 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.043 23:10:57 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.043 23:10:57 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:10:35.043 23:10:57 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:35.043 23:10:57 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.043 23:10:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:10:35.043 23:10:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:35.043 23:10:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.043 23:10:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.043 23:10:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.043 ************************************ 00:10:35.043 START TEST nvmf_host_management 00:10:35.043 ************************************ 00:10:35.043 
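The nvmf/common.sh exports above (ports 4420-4422, the freshly generated host NQN from nvme gen-hostnqn, and the NVME_CONNECT / NVME_HOST wrappers) are the connection identity used for the rest of the run. As a hedged sketch, not a command taken from this log, an initiator-side step would typically consume them along these lines (NVMF_FIRST_TARGET_IP is assigned 10.0.0.2 by nvmf_veth_init further down; individual tests may substitute their own subsystem NQN for NVME_SUBNQN):
    # illustrative only: how the exported variables are typically combined with nvme-cli
    $NVME_CONNECT -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
    # expands to roughly: nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    #   --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53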
23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:35.043 * Looking for test storage... 00:10:35.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
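At this point host_management.sh has pulled in nvmf/common.sh, fixed the malloc bdev geometry (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), and nvmftestinit has registered the nvmftestfini trap before any network state is created. A minimal sketch of that cleanup idiom, assuming the nvmf_tgt_ns_spdk namespace and nvmf_br bridge that the following records create (this shows the shape of the pattern, not SPDK's actual nvmftestfini):
    # sketch of the teardown guaranteed by the trap registered at nvmf/common.sh@446
    cleanup() {
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # removes the namespace and the veth ends moved into it
        ip link delete nvmf_br 2>/dev/null || true              # removes the host-side bridge
        ip link delete nvmf_init_if 2>/dev/null || true         # deleting one veth end drops its peer as well
    }
    trap cleanup SIGINT SIGTERM EXIT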
00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:35.043 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:35.044 Cannot find device "nvmf_init_br" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:35.044 Cannot find device "nvmf_tgt_br" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.044 Cannot find device "nvmf_tgt_br2" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:35.044 Cannot find device "nvmf_init_br" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:35.044 Cannot find device "nvmf_tgt_br" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:10:35.044 23:10:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:35.044 Cannot find device "nvmf_tgt_br2" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.044 Cannot find device "nvmf_br" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.044 Cannot find device "nvmf_init_if" 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:35.044 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.350 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.350 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.350 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
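The nvmf_veth_init records above, together with the bridge enslaving, iptables rules and pings in the records that follow, build a small two-namespace topology; the earlier Cannot find device / Cannot open network namespace messages are the pre-setup cleanup pass failing as expected because nothing exists yet. Condensed from the commands visible in this log (the second target interface on 10.0.0.3 is configured the same way and omitted here for brevity):
    # initiator stays in the root namespace on 10.0.0.1; the target lives in nvmf_tgt_ns_spdk on 10.0.0.2
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the remaining steps appear in the records that follow this note
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # connectivity check over the bridge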
00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:10:35.351 00:10:35.351 --- 10.0.0.2 ping statistics --- 00:10:35.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.351 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.351 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.351 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:35.351 00:10:35.351 --- 10.0.0.3 ping statistics --- 00:10:35.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.351 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:35.351 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:10:35.609 00:10:35.609 --- 10.0.0.1 ping statistics --- 00:10:35.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.609 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65263 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65263 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65263 ']' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.609 23:10:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:35.609 [2024-07-24 23:10:57.905581] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:35.609 [2024-07-24 23:10:57.905707] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.609 [2024-07-24 23:10:58.045784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.867 [2024-07-24 23:10:58.203735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.867 [2024-07-24 23:10:58.203827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.867 [2024-07-24 23:10:58.203838] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.867 [2024-07-24 23:10:58.203846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.867 [2024-07-24 23:10:58.203854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
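nvmfappstart has launched nvmf_tgt inside the target namespace with -m 0x1E, which is why the next record reports reactors on cores 1 through 4 (mask 0x1E selects exactly those four cores), and waitforlisten 65263 blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming scripts/rpc.py from the same tree (the real helper layers retries and timeouts on top of this):
    # poll the RPC socket until the freshly started target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 65263 2>/dev/null || exit 1    # bail out if the target process died
        sleep 0.2
    done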
00:10:35.867 [2024-07-24 23:10:58.204003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.867 [2024-07-24 23:10:58.204110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.867 [2024-07-24 23:10:58.204171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:35.867 [2024-07-24 23:10:58.204175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.867 [2024-07-24 23:10:58.284289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 [2024-07-24 23:10:58.415156] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 Malloc0 00:10:36.125 [2024-07-24 23:10:58.505518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65315 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65315 /var/tmp/bdevperf.sock 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65315 ']' 
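A few records above, the cat at host_management.sh@23 feeds a prepared rpcs.txt into a single rpc_cmd batch; the file's content is not echoed into this log, so the following is a reconstruction, not the verbatim file. A batch along these lines would produce the state the notices report (the Malloc0 bdev, the TCP transport created explicitly at @18, and the listener on 10.0.0.2 port 4420):
    # assumed reconstruction of the batched RPCs (rpc.py syntax), not copied from this run
    bdev_malloc_create -b Malloc0 64 512                                   # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420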
00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:36.125 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:36.126 { 00:10:36.126 "params": { 00:10:36.126 "name": "Nvme$subsystem", 00:10:36.126 "trtype": "$TEST_TRANSPORT", 00:10:36.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:36.126 "adrfam": "ipv4", 00:10:36.126 "trsvcid": "$NVMF_PORT", 00:10:36.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:36.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:36.126 "hdgst": ${hdgst:-false}, 00:10:36.126 "ddgst": ${ddgst:-false} 00:10:36.126 }, 00:10:36.126 "method": "bdev_nvme_attach_controller" 00:10:36.126 } 00:10:36.126 EOF 00:10:36.126 )") 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:36.126 23:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:36.126 "params": { 00:10:36.126 "name": "Nvme0", 00:10:36.126 "trtype": "tcp", 00:10:36.126 "traddr": "10.0.0.2", 00:10:36.126 "adrfam": "ipv4", 00:10:36.126 "trsvcid": "4420", 00:10:36.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:36.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:36.126 "hdgst": false, 00:10:36.126 "ddgst": false 00:10:36.126 }, 00:10:36.126 "method": "bdev_nvme_attach_controller" 00:10:36.126 }' 00:10:36.384 [2024-07-24 23:10:58.615620] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
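The /dev/fd/63 argument above is just bash process substitution: gen_nvmf_target_json (from test/nvmf/common.sh) emits the bdev_nvme_attach_controller config printed in the trace, and bdevperf reads it as its --json input without ever touching disk. The invocation, slightly simplified:

"$SPDK/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!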
00:10:36.384 [2024-07-24 23:10:58.615720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65315 ] 00:10:36.384 [2024-07-24 23:10:58.756837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.642 [2024-07-24 23:10:58.886841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.642 [2024-07-24 23:10:58.952633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:36.642 Running I/O for 10 seconds... 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.207 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.467 [2024-07-24 23:10:59.713706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.467 [2024-07-24 23:10:59.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.713791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.467 [2024-07-24 23:10:59.713801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.713812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.467 [2024-07-24 23:10:59.713821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.713831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:37.467 [2024-07-24 23:10:59.713840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.713850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe7d50 is same with the state(5) to be set 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.467 23:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:37.467 [2024-07-24 23:10:59.735327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 
23:10:59.735499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 
23:10:59.735730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.467 [2024-07-24 23:10:59.735751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.467 [2024-07-24 23:10:59.735788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.735956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 
23:10:59.735977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.735987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 
23:10:59.736256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 
23:10:59.736480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 23:10:59.736709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.468 [2024-07-24 23:10:59.736718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.468 [2024-07-24 
23:10:59.736730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 
23:10:59.736956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:37.469 [2024-07-24 23:10:59.736965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:37.469 [2024-07-24 23:10:59.736976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeff50 is same with the state(5) to be set 00:10:37.469 task offset: 122880 on job bdev=Nvme0n1 fails 00:10:37.469 00:10:37.469 Latency(us) 00:10:37.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.469 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:37.469 Job: Nvme0n1 ended in about 0.67 seconds with error 00:10:37.469 Verification LBA range: start 0x0 length 0x400 00:10:37.469 Nvme0n1 : 0.67 1441.74 90.11 96.12 0.00 40381.26 2249.08 45279.42 00:10:37.469 =================================================================================================================== 00:10:37.469 Total : 1441.74 90.11 96.12 0.00 40381.26 2249.08 45279.42 00:10:37.469 [2024-07-24 23:10:59.737054] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfeff50 was disconnected and freed. reset controller. 00:10:37.469 [2024-07-24 23:10:59.737180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe7d50 (9): Bad file descriptor 00:10:37.469 [2024-07-24 23:10:59.738275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:37.469 [2024-07-24 23:10:59.740932] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.469 [2024-07-24 23:10:59.748492] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
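The abort storm and the failed-job latency table above are the intended outcome: once bdevperf has accumulated enough read completions, the script revokes the host's access to the subsystem mid-I/O, which tears down the queue pair (hence the SQ DELETION aborts on every queued WRITE), then restores access so bdev_nvme can reset the controller. Condensed from host_management.sh@80-87, with a plain polling loop standing in for the waitforio helper:

# Wait until the bdev has serviced at least 100 reads...
while :; do
    reads=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    (( reads >= 100 )) && break
    sleep 0.25
done
# ...then revoke and restore host access while I/O is still in flight.
"$rpc_py" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1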
00:10:38.402 23:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65315 00:10:38.402 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65315) - No such process 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:38.403 { 00:10:38.403 "params": { 00:10:38.403 "name": "Nvme$subsystem", 00:10:38.403 "trtype": "$TEST_TRANSPORT", 00:10:38.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.403 "adrfam": "ipv4", 00:10:38.403 "trsvcid": "$NVMF_PORT", 00:10:38.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.403 "hdgst": ${hdgst:-false}, 00:10:38.403 "ddgst": ${ddgst:-false} 00:10:38.403 }, 00:10:38.403 "method": "bdev_nvme_attach_controller" 00:10:38.403 } 00:10:38.403 EOF 00:10:38.403 )") 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:10:38.403 23:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:38.403 "params": { 00:10:38.403 "name": "Nvme0", 00:10:38.403 "trtype": "tcp", 00:10:38.403 "traddr": "10.0.0.2", 00:10:38.403 "adrfam": "ipv4", 00:10:38.403 "trsvcid": "4420", 00:10:38.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:38.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:38.403 "hdgst": false, 00:10:38.403 "ddgst": false 00:10:38.403 }, 00:10:38.403 "method": "bdev_nvme_attach_controller" 00:10:38.403 }' 00:10:38.403 [2024-07-24 23:11:00.790747] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:38.403 [2024-07-24 23:11:00.790845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65353 ] 00:10:38.660 [2024-07-24 23:11:00.930784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.660 [2024-07-24 23:11:01.058551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.660 [2024-07-24 23:11:01.123565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:38.918 Running I/O for 1 seconds... 
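Everything from host_management.sh@91 onward is the happy-path check: the first bdevperf already exited on its own after the reset, so the kill is allowed to fail ("No such process"), stale CPU lock files are cleared, and a second clean 1-second verify run is launched against the same generated JSON config (the /dev/fd/62 in the trace is again process substitution). Roughly:

kill -9 "$perfpid" 2> /dev/null || true            # already gone, tolerated
rm -f /var/tmp/spdk_cpu_lock_00{1,2,3,4}
"$SPDK/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1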
00:10:39.851 00:10:39.851 Latency(us) 00:10:39.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.851 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:39.851 Verification LBA range: start 0x0 length 0x400 00:10:39.851 Nvme0n1 : 1.02 1504.03 94.00 0.00 0.00 41706.15 4230.05 40036.54 00:10:39.851 =================================================================================================================== 00:10:39.851 Total : 1504.03 94.00 0.00 0.00 41706.15 4230.05 40036.54 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.108 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.108 rmmod nvme_tcp 00:10:40.108 rmmod nvme_fabrics 00:10:40.108 rmmod nvme_keyring 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65263 ']' 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65263 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65263 ']' 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65263 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65263 00:10:40.366 killing process with pid 65263 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65263' 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65263 00:10:40.366 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65263 00:10:40.624 [2024-07-24 23:11:02.959917] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.624 23:11:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.624 23:11:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:40.624 23:11:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:40.624 00:10:40.624 real 0m5.714s 00:10:40.624 user 0m21.622s 00:10:40.624 sys 0m1.592s 00:10:40.624 23:11:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.624 ************************************ 00:10:40.624 END TEST nvmf_host_management 00:10:40.624 ************************************ 00:10:40.624 23:11:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:40.624 23:11:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.624 23:11:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:40.624 23:11:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.624 23:11:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.624 23:11:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.624 ************************************ 00:10:40.624 START TEST nvmf_lvol 00:10:40.624 ************************************ 00:10:40.624 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:40.882 * Looking for test storage... 
00:10:40.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.882 23:11:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:40.883 23:11:03 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:40.883 Cannot find device "nvmf_tgt_br" 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.883 Cannot find device "nvmf_tgt_br2" 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:40.883 Cannot find device "nvmf_tgt_br" 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:40.883 Cannot find device "nvmf_tgt_br2" 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.883 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.883 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:41.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:10:41.140 00:10:41.140 --- 10.0.0.2 ping statistics --- 00:10:41.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.140 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:41.140 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:41.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:41.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:10:41.141 00:10:41.141 --- 10.0.0.3 ping statistics --- 00:10:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.141 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:41.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:41.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:41.141 00:10:41.141 --- 10.0.0.1 ping statistics --- 00:10:41.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.141 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:41.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65575 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65575 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65575 ']' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.141 23:11:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:41.398 [2024-07-24 23:11:03.636213] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:41.398 [2024-07-24 23:11:03.636653] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.398 [2024-07-24 23:11:03.787057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:41.656 [2024-07-24 23:11:03.912873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.656 [2024-07-24 23:11:03.913174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
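Before this second target came up, nvmf_veth_init had rebuilt the test topology that the pings above verify: an initiator-side veth on 10.0.0.1 and a target-side veth moved into the nvmf_tgt_ns_spdk namespace on 10.0.0.2, bridged together, with port 4420 opened. Stripped of the error-tolerant teardown checks, the wiring traced above condenses to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side,    10.0.0.2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator reaches the target address
# (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way)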
00:10:41.656 [2024-07-24 23:11:03.913312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.656 [2024-07-24 23:11:03.913442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.656 [2024-07-24 23:11:03.913484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:41.656 [2024-07-24 23:11:03.913767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.656 [2024-07-24 23:11:03.913834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.656 [2024-07-24 23:11:03.913843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.656 [2024-07-24 23:11:03.972947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.223 23:11:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.482 [2024-07-24 23:11:04.815534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.482 23:11:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.741 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:42.741 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.001 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:43.001 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:43.259 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:43.518 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ec695bc9-7171-4436-bf5f-87932e2e8db5 00:10:43.518 23:11:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ec695bc9-7171-4436-bf5f-87932e2e8db5 lvol 20 00:10:43.777 23:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=15c92699-70a7-4886-935b-5c7666a03fa6 00:10:43.777 23:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:44.035 23:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15c92699-70a7-4886-935b-5c7666a03fa6 00:10:44.293 23:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:44.550 [2024-07-24 23:11:06.898578] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.550 23:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.815 23:11:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65645 00:10:44.815 23:11:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:44.815 23:11:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:46.187 23:11:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 15c92699-70a7-4886-935b-5c7666a03fa6 MY_SNAPSHOT 00:10:46.187 23:11:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bae2cf75-363f-47ad-a500-7a5b3d15f082 00:10:46.187 23:11:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 15c92699-70a7-4886-935b-5c7666a03fa6 30 00:10:46.444 23:11:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone bae2cf75-363f-47ad-a500-7a5b3d15f082 MY_CLONE 00:10:46.702 23:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=15ef3915-68b2-4ccb-bd85-379356df351c 00:10:46.702 23:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 15ef3915-68b2-4ccb-bd85-379356df351c 00:10:47.634 23:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65645 00:10:55.743 Initializing NVMe Controllers 00:10:55.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:55.743 Controller IO queue size 128, less than required. 00:10:55.743 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:55.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:55.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:55.743 Initialization complete. Launching workers. 
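Condensed, the RPC sequence nvmf_lvol drives above is: build an lvstore on a RAID-0 of two malloc bdevs, export one lvol over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf keeps 128-deep random writes in flight. A sketch of the same calls, with the run-time UUIDs captured into shell variables instead of the literal values seen in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # prints the lvol UUID
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ...then, with spdk_nvme_perf already running against the subsystem:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"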
00:10:55.743 ======================================================== 00:10:55.743 Latency(us) 00:10:55.743 Device Information : IOPS MiB/s Average min max 00:10:55.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8322.09 32.51 15383.19 2265.17 97710.49 00:10:55.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8124.59 31.74 15758.57 2108.15 96786.30 00:10:55.743 ======================================================== 00:10:55.743 Total : 16446.67 64.24 15568.63 2108.15 97710.49 00:10:55.743 00:10:55.743 23:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:55.743 23:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 15c92699-70a7-4886-935b-5c7666a03fa6 00:10:56.002 23:11:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec695bc9-7171-4436-bf5f-87932e2e8db5 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.260 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.260 rmmod nvme_tcp 00:10:56.260 rmmod nvme_fabrics 00:10:56.260 rmmod nvme_keyring 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65575 ']' 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65575 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65575 ']' 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65575 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65575 00:10:56.519 killing process with pid 65575 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65575' 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65575 00:10:56.519 23:11:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65575 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.777 
23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:56.777 ************************************ 00:10:56.777 END TEST nvmf_lvol 00:10:56.777 ************************************ 00:10:56.777 00:10:56.777 real 0m16.048s 00:10:56.777 user 1m6.418s 00:10:56.777 sys 0m4.264s 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:56.777 23:11:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:56.777 23:11:19 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:56.777 23:11:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:56.777 23:11:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.777 23:11:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:56.777 ************************************ 00:10:56.777 START TEST nvmf_lvs_grow 00:10:56.777 ************************************ 00:10:56.777 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:57.036 * Looking for test storage... 
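The teardown that closes nvmf_lvol above is the standard nvmftestfini path: unload the host-side NVMe/TCP modules, kill the target process, then tear the namespace and initiator address back down. Roughly (the namespace removal is shown as a plain delete for illustration, assumed to be what _remove_spdk_ns amounts to for this run):

  modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # approximately what killprocess() does
  ip netns delete nvmf_tgt_ns_spdk     # assumption: the effect of _remove_spdk_ns here
  ip -4 addr flush nvmf_init_if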
00:10:57.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:57.036 Cannot find device "nvmf_tgt_br" 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.036 Cannot find device "nvmf_tgt_br2" 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:10:57.036 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:57.037 Cannot find device "nvmf_tgt_br" 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:57.037 Cannot find device "nvmf_tgt_br2" 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.037 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.037 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:57.037 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:57.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:57.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:57.295 00:10:57.295 --- 10.0.0.2 ping statistics --- 00:10:57.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.295 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:57.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:57.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:57.295 00:10:57.295 --- 10.0.0.3 ping statistics --- 00:10:57.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.295 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:57.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:57.295 00:10:57.295 --- 10.0.0.1 ping statistics --- 00:10:57.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.295 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65975 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65975 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65975 ']' 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
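For reference, the topology nvmf_veth_init rebuilds above is three veth pairs hanging off one bridge: the initiator end (10.0.0.1) stays in the default namespace while both target ports (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk. The same plumbing, condensed from the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> target ports
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target ns -> initiator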
00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.295 23:11:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:57.295 [2024-07-24 23:11:19.682921] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:10:57.295 [2024-07-24 23:11:19.683207] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.554 [2024-07-24 23:11:19.821250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.554 [2024-07-24 23:11:19.937798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.554 [2024-07-24 23:11:19.938362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.554 [2024-07-24 23:11:19.938607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.554 [2024-07-24 23:11:19.938918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.554 [2024-07-24 23:11:19.939001] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.554 [2024-07-24 23:11:19.939089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.554 [2024-07-24 23:11:19.994317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:58.489 [2024-07-24 23:11:20.908064] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 ************************************ 00:10:58.489 START TEST lvs_grow_clean 00:10:58.489 ************************************ 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:58.489 23:11:20 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:58.489 23:11:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:59.056 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:59.056 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:59.314 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:10:59.314 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:59.314 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:10:59.573 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:59.573 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:59.573 23:11:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b lvol 150 00:10:59.834 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=302740d1-f142-45e8-8e96-cac99a785a8e 00:10:59.834 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:59.834 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:00.129 [2024-07-24 23:11:22.392909] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:00.129 [2024-07-24 23:11:22.393493] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:00.129 true 00:11:00.129 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:00.129 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:00.388 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:00.388 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:00.647 23:11:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 302740d1-f142-45e8-8e96-cac99a785a8e 00:11:00.905 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:00.905 [2024-07-24 23:11:23.366480] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.905 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66054 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66054 /var/tmp/bdevperf.sock 00:11:01.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 66054 ']' 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:01.163 23:11:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:01.420 [2024-07-24 23:11:23.651332] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
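The core of lvs_grow_clean is all visible in this trace: put an lvstore on a 200M file-backed AIO bdev (49 data clusters at the 4 MiB cluster size), export a 150M lvol over NVMe/TCP, grow the backing file to 400M, rescan the AIO bdev, then call bdev_lvol_grow_lvstore while bdevperf keeps writing and expect the cluster count to reach 99, as the rest of the trace below confirms. A sketch with the lvstore UUID captured into a variable:

  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$aio"          # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev    # ...and let the AIO bdev pick up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'    # 99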
00:11:01.420 [2024-07-24 23:11:23.651626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66054 ] 00:11:01.420 [2024-07-24 23:11:23.787186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.677 [2024-07-24 23:11:23.954189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.677 [2024-07-24 23:11:24.032968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:02.243 23:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.243 23:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:02.243 23:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:02.501 Nvme0n1 00:11:02.501 23:11:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:02.758 [ 00:11:02.758 { 00:11:02.758 "name": "Nvme0n1", 00:11:02.758 "aliases": [ 00:11:02.759 "302740d1-f142-45e8-8e96-cac99a785a8e" 00:11:02.759 ], 00:11:02.759 "product_name": "NVMe disk", 00:11:02.759 "block_size": 4096, 00:11:02.759 "num_blocks": 38912, 00:11:02.759 "uuid": "302740d1-f142-45e8-8e96-cac99a785a8e", 00:11:02.759 "assigned_rate_limits": { 00:11:02.759 "rw_ios_per_sec": 0, 00:11:02.759 "rw_mbytes_per_sec": 0, 00:11:02.759 "r_mbytes_per_sec": 0, 00:11:02.759 "w_mbytes_per_sec": 0 00:11:02.759 }, 00:11:02.759 "claimed": false, 00:11:02.759 "zoned": false, 00:11:02.759 "supported_io_types": { 00:11:02.759 "read": true, 00:11:02.759 "write": true, 00:11:02.759 "unmap": true, 00:11:02.759 "flush": true, 00:11:02.759 "reset": true, 00:11:02.759 "nvme_admin": true, 00:11:02.759 "nvme_io": true, 00:11:02.759 "nvme_io_md": false, 00:11:02.759 "write_zeroes": true, 00:11:02.759 "zcopy": false, 00:11:02.759 "get_zone_info": false, 00:11:02.759 "zone_management": false, 00:11:02.759 "zone_append": false, 00:11:02.759 "compare": true, 00:11:02.759 "compare_and_write": true, 00:11:02.759 "abort": true, 00:11:02.759 "seek_hole": false, 00:11:02.759 "seek_data": false, 00:11:02.759 "copy": true, 00:11:02.759 "nvme_iov_md": false 00:11:02.759 }, 00:11:02.759 "memory_domains": [ 00:11:02.759 { 00:11:02.759 "dma_device_id": "system", 00:11:02.759 "dma_device_type": 1 00:11:02.759 } 00:11:02.759 ], 00:11:02.759 "driver_specific": { 00:11:02.759 "nvme": [ 00:11:02.759 { 00:11:02.759 "trid": { 00:11:02.759 "trtype": "TCP", 00:11:02.759 "adrfam": "IPv4", 00:11:02.759 "traddr": "10.0.0.2", 00:11:02.759 "trsvcid": "4420", 00:11:02.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:02.759 }, 00:11:02.759 "ctrlr_data": { 00:11:02.759 "cntlid": 1, 00:11:02.759 "vendor_id": "0x8086", 00:11:02.759 "model_number": "SPDK bdev Controller", 00:11:02.759 "serial_number": "SPDK0", 00:11:02.759 "firmware_revision": "24.09", 00:11:02.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:02.759 "oacs": { 00:11:02.759 "security": 0, 00:11:02.759 "format": 0, 00:11:02.759 "firmware": 0, 00:11:02.759 "ns_manage": 0 00:11:02.759 }, 00:11:02.759 "multi_ctrlr": true, 00:11:02.759 
"ana_reporting": false 00:11:02.759 }, 00:11:02.759 "vs": { 00:11:02.759 "nvme_version": "1.3" 00:11:02.759 }, 00:11:02.759 "ns_data": { 00:11:02.759 "id": 1, 00:11:02.759 "can_share": true 00:11:02.759 } 00:11:02.759 } 00:11:02.759 ], 00:11:02.759 "mp_policy": "active_passive" 00:11:02.759 } 00:11:02.759 } 00:11:02.759 ] 00:11:02.759 23:11:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66083 00:11:02.759 23:11:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:02.759 23:11:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:03.038 Running I/O for 10 seconds... 00:11:03.985 Latency(us) 00:11:03.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.985 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:11:03.985 =================================================================================================================== 00:11:03.985 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:11:03.985 00:11:04.920 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:04.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.920 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:11:04.920 =================================================================================================================== 00:11:04.920 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:11:04.920 00:11:05.178 true 00:11:05.178 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:05.178 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:05.436 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:05.436 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:05.436 23:11:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66083 00:11:06.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.002 Nvme0n1 : 3.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:11:06.002 =================================================================================================================== 00:11:06.002 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:11:06.002 00:11:06.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.941 Nvme0n1 : 4.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:11:06.941 =================================================================================================================== 00:11:06.941 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:11:06.941 00:11:07.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.891 Nvme0n1 : 5.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:11:07.891 =================================================================================================================== 00:11:07.891 Total : 7493.00 29.27 0.00 0.00 0.00 
0.00 0.00 00:11:07.891 00:11:09.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.269 Nvme0n1 : 6.00 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:11:09.269 =================================================================================================================== 00:11:09.269 Total : 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:11:09.269 00:11:10.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.220 Nvme0n1 : 7.00 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:11:10.220 =================================================================================================================== 00:11:10.220 Total : 7438.57 29.06 0.00 0.00 0.00 0.00 0.00 00:11:10.220 00:11:11.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.179 Nvme0n1 : 8.00 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:11:11.179 =================================================================================================================== 00:11:11.179 Total : 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:11:11.179 00:11:12.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.113 Nvme0n1 : 9.00 7351.89 28.72 0.00 0.00 0.00 0.00 0.00 00:11:12.113 =================================================================================================================== 00:11:12.113 Total : 7351.89 28.72 0.00 0.00 0.00 0.00 0.00 00:11:12.113 00:11:13.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.050 Nvme0n1 : 10.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:11:13.050 =================================================================================================================== 00:11:13.051 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:11:13.051 00:11:13.051 00:11:13.051 Latency(us) 00:11:13.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.051 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.051 Nvme0n1 : 10.01 7310.76 28.56 0.00 0.00 17503.38 14000.87 39798.23 00:11:13.051 =================================================================================================================== 00:11:13.051 Total : 7310.76 28.56 0.00 0.00 17503.38 14000.87 39798.23 00:11:13.051 0 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66054 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 66054 ']' 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 66054 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66054 00:11:13.051 killing process with pid 66054 00:11:13.051 Received shutdown signal, test time was about 10.000000 seconds 00:11:13.051 00:11:13.051 Latency(us) 00:11:13.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.051 =================================================================================================================== 00:11:13.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66054' 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 66054 00:11:13.051 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 66054 00:11:13.309 23:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.568 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:13.840 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:13.840 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:14.103 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:14.103 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:14.103 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:14.362 [2024-07-24 23:11:36.816821] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:14.621 23:11:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:14.879 request: 00:11:14.879 { 00:11:14.879 "uuid": "1cb0de79-76d3-4504-b57d-77d9acf7b75b", 00:11:14.879 "method": "bdev_lvol_get_lvstores", 00:11:14.879 "req_id": 1 00:11:14.879 } 00:11:14.879 Got JSON-RPC error response 00:11:14.879 response: 00:11:14.879 { 00:11:14.879 "code": -19, 00:11:14.879 "message": "No such device" 00:11:14.879 } 00:11:14.879 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:14.879 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:14.879 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:14.879 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:14.879 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:15.137 aio_bdev 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 302740d1-f142-45e8-8e96-cac99a785a8e 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=302740d1-f142-45e8-8e96-cac99a785a8e 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:15.137 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:15.396 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 302740d1-f142-45e8-8e96-cac99a785a8e -t 2000 00:11:15.655 [ 00:11:15.655 { 00:11:15.655 "name": "302740d1-f142-45e8-8e96-cac99a785a8e", 00:11:15.655 "aliases": [ 00:11:15.655 "lvs/lvol" 00:11:15.655 ], 00:11:15.655 "product_name": "Logical Volume", 00:11:15.655 "block_size": 4096, 00:11:15.655 "num_blocks": 38912, 00:11:15.655 "uuid": "302740d1-f142-45e8-8e96-cac99a785a8e", 00:11:15.655 "assigned_rate_limits": { 00:11:15.655 "rw_ios_per_sec": 0, 00:11:15.655 "rw_mbytes_per_sec": 0, 00:11:15.655 "r_mbytes_per_sec": 0, 00:11:15.655 "w_mbytes_per_sec": 0 00:11:15.655 }, 00:11:15.655 "claimed": false, 00:11:15.655 "zoned": false, 00:11:15.655 "supported_io_types": { 00:11:15.655 "read": true, 00:11:15.655 "write": true, 00:11:15.655 "unmap": true, 00:11:15.655 "flush": false, 00:11:15.655 "reset": true, 00:11:15.655 "nvme_admin": false, 00:11:15.655 "nvme_io": false, 00:11:15.655 "nvme_io_md": false, 00:11:15.655 "write_zeroes": true, 00:11:15.655 "zcopy": false, 00:11:15.655 "get_zone_info": false, 00:11:15.655 "zone_management": false, 00:11:15.655 "zone_append": false, 00:11:15.655 "compare": false, 00:11:15.655 "compare_and_write": false, 00:11:15.655 "abort": false, 00:11:15.655 "seek_hole": true, 00:11:15.655 "seek_data": true, 00:11:15.655 "copy": false, 00:11:15.655 "nvme_iov_md": false 00:11:15.655 }, 00:11:15.655 
"driver_specific": { 00:11:15.655 "lvol": { 00:11:15.655 "lvol_store_uuid": "1cb0de79-76d3-4504-b57d-77d9acf7b75b", 00:11:15.655 "base_bdev": "aio_bdev", 00:11:15.655 "thin_provision": false, 00:11:15.655 "num_allocated_clusters": 38, 00:11:15.655 "snapshot": false, 00:11:15.655 "clone": false, 00:11:15.655 "esnap_clone": false 00:11:15.655 } 00:11:15.655 } 00:11:15.655 } 00:11:15.655 ] 00:11:15.655 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:11:15.655 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:15.655 23:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:15.915 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:15.915 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:15.915 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:16.174 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:16.174 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 302740d1-f142-45e8-8e96-cac99a785a8e 00:11:16.432 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1cb0de79-76d3-4504-b57d-77d9acf7b75b 00:11:16.691 23:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:16.949 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:17.208 ************************************ 00:11:17.208 END TEST lvs_grow_clean 00:11:17.208 ************************************ 00:11:17.208 00:11:17.208 real 0m18.623s 00:11:17.208 user 0m17.342s 00:11:17.208 sys 0m2.763s 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:17.208 ************************************ 00:11:17.208 START TEST lvs_grow_dirty 00:11:17.208 ************************************ 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:17.208 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:17.467 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:17.467 23:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:18.034 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a566d7f5-bde2-4aab-a937-abb832e622b3 lvol 150 00:11:18.292 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:18.292 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:18.292 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:18.551 [2024-07-24 23:11:40.922587] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:18.551 [2024-07-24 23:11:40.922684] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:18.551 true 00:11:18.551 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:18.551 23:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:18.809 23:11:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:18.809 23:11:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:19.068 23:11:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:19.326 23:11:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:19.584 [2024-07-24 23:11:41.867116] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.584 23:11:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66337 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66337 /var/tmp/bdevperf.sock 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66337 ']' 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:19.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.842 23:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:19.842 [2024-07-24 23:11:42.191778] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
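Stripped of the xtrace noise, the lvs_grow_dirty setup above amounts to roughly the following RPC sequence (same backing file, sizes and addresses as printed in this run; a condensed sketch, not the full nvmf_lvs_grow.sh, with $rpc standing in for the repo's scripts/rpc.py):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters (49 data clusters) -> 150 MiB lvol
truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
# grow the file underneath the bdev and let the AIO bdev pick up the new size (51200 -> 102400 blocks);
# the lvstore itself still reports 49 clusters until it is explicitly grown later in the test
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
$rpc bdev_aio_rescan aio_bdev
# export the lvol over NVMe/TCP so bdevperf (started above with -r /var/tmp/bdevperf.sock) can drive I/O
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420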
00:11:19.842 [2024-07-24 23:11:42.192155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66337 ] 00:11:20.100 [2024-07-24 23:11:42.329512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.100 [2024-07-24 23:11:42.479094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.100 [2024-07-24 23:11:42.563561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:21.047 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.047 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:21.047 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:21.047 Nvme0n1 00:11:21.047 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:21.307 [ 00:11:21.307 { 00:11:21.307 "name": "Nvme0n1", 00:11:21.307 "aliases": [ 00:11:21.307 "f373c2c0-1372-4ca4-8e97-0143aec0e122" 00:11:21.307 ], 00:11:21.307 "product_name": "NVMe disk", 00:11:21.307 "block_size": 4096, 00:11:21.307 "num_blocks": 38912, 00:11:21.307 "uuid": "f373c2c0-1372-4ca4-8e97-0143aec0e122", 00:11:21.307 "assigned_rate_limits": { 00:11:21.307 "rw_ios_per_sec": 0, 00:11:21.307 "rw_mbytes_per_sec": 0, 00:11:21.307 "r_mbytes_per_sec": 0, 00:11:21.307 "w_mbytes_per_sec": 0 00:11:21.307 }, 00:11:21.307 "claimed": false, 00:11:21.307 "zoned": false, 00:11:21.307 "supported_io_types": { 00:11:21.307 "read": true, 00:11:21.307 "write": true, 00:11:21.307 "unmap": true, 00:11:21.307 "flush": true, 00:11:21.307 "reset": true, 00:11:21.307 "nvme_admin": true, 00:11:21.307 "nvme_io": true, 00:11:21.307 "nvme_io_md": false, 00:11:21.307 "write_zeroes": true, 00:11:21.307 "zcopy": false, 00:11:21.307 "get_zone_info": false, 00:11:21.307 "zone_management": false, 00:11:21.307 "zone_append": false, 00:11:21.307 "compare": true, 00:11:21.307 "compare_and_write": true, 00:11:21.307 "abort": true, 00:11:21.307 "seek_hole": false, 00:11:21.307 "seek_data": false, 00:11:21.307 "copy": true, 00:11:21.307 "nvme_iov_md": false 00:11:21.307 }, 00:11:21.307 "memory_domains": [ 00:11:21.307 { 00:11:21.307 "dma_device_id": "system", 00:11:21.307 "dma_device_type": 1 00:11:21.307 } 00:11:21.307 ], 00:11:21.307 "driver_specific": { 00:11:21.307 "nvme": [ 00:11:21.307 { 00:11:21.307 "trid": { 00:11:21.307 "trtype": "TCP", 00:11:21.307 "adrfam": "IPv4", 00:11:21.307 "traddr": "10.0.0.2", 00:11:21.307 "trsvcid": "4420", 00:11:21.307 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:21.307 }, 00:11:21.307 "ctrlr_data": { 00:11:21.307 "cntlid": 1, 00:11:21.307 "vendor_id": "0x8086", 00:11:21.307 "model_number": "SPDK bdev Controller", 00:11:21.307 "serial_number": "SPDK0", 00:11:21.307 "firmware_revision": "24.09", 00:11:21.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:21.307 "oacs": { 00:11:21.307 "security": 0, 00:11:21.307 "format": 0, 00:11:21.307 "firmware": 0, 00:11:21.307 "ns_manage": 0 00:11:21.307 }, 00:11:21.307 "multi_ctrlr": true, 00:11:21.307 
"ana_reporting": false 00:11:21.307 }, 00:11:21.307 "vs": { 00:11:21.307 "nvme_version": "1.3" 00:11:21.307 }, 00:11:21.307 "ns_data": { 00:11:21.307 "id": 1, 00:11:21.307 "can_share": true 00:11:21.307 } 00:11:21.307 } 00:11:21.307 ], 00:11:21.307 "mp_policy": "active_passive" 00:11:21.307 } 00:11:21.307 } 00:11:21.307 ] 00:11:21.307 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:21.307 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66360 00:11:21.307 23:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:21.565 Running I/O for 10 seconds... 00:11:22.502 Latency(us) 00:11:22.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.502 Nvme0n1 : 1.00 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:11:22.502 =================================================================================================================== 00:11:22.502 Total : 7747.00 30.26 0.00 0.00 0.00 0.00 0.00 00:11:22.502 00:11:23.437 23:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:23.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.437 Nvme0n1 : 2.00 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:11:23.437 =================================================================================================================== 00:11:23.437 Total : 7683.50 30.01 0.00 0.00 0.00 0.00 0.00 00:11:23.437 00:11:23.696 true 00:11:23.696 23:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:23.696 23:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:23.955 23:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:23.955 23:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:23.955 23:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66360 00:11:24.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.521 Nvme0n1 : 3.00 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:11:24.521 =================================================================================================================== 00:11:24.521 Total : 7662.33 29.93 0.00 0.00 0.00 0.00 0.00 00:11:24.521 00:11:25.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:25.456 Nvme0n1 : 4.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:11:25.456 =================================================================================================================== 00:11:25.456 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:11:25.456 00:11:26.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:26.391 Nvme0n1 : 5.00 7594.60 29.67 0.00 0.00 0.00 0.00 0.00 00:11:26.391 =================================================================================================================== 00:11:26.391 Total : 7594.60 29.67 0.00 0.00 0.00 
0.00 0.00 00:11:26.391 00:11:27.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:27.766 Nvme0n1 : 6.00 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:11:27.766 =================================================================================================================== 00:11:27.766 Total : 7577.67 29.60 0.00 0.00 0.00 0.00 0.00 00:11:27.766 00:11:28.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.700 Nvme0n1 : 7.00 7530.14 29.41 0.00 0.00 0.00 0.00 0.00 00:11:28.700 =================================================================================================================== 00:11:28.700 Total : 7530.14 29.41 0.00 0.00 0.00 0.00 0.00 00:11:28.700 00:11:29.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.634 Nvme0n1 : 8.00 7477.88 29.21 0.00 0.00 0.00 0.00 0.00 00:11:29.634 =================================================================================================================== 00:11:29.634 Total : 7477.88 29.21 0.00 0.00 0.00 0.00 0.00 00:11:29.634 00:11:30.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.567 Nvme0n1 : 9.00 7437.22 29.05 0.00 0.00 0.00 0.00 0.00 00:11:30.567 =================================================================================================================== 00:11:30.567 Total : 7437.22 29.05 0.00 0.00 0.00 0.00 0.00 00:11:30.567 00:11:31.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.498 Nvme0n1 : 10.00 7404.70 28.92 0.00 0.00 0.00 0.00 0.00 00:11:31.498 =================================================================================================================== 00:11:31.498 Total : 7404.70 28.92 0.00 0.00 0.00 0.00 0.00 00:11:31.498 00:11:31.498 00:11:31.498 Latency(us) 00:11:31.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.498 Nvme0n1 : 10.01 7409.51 28.94 0.00 0.00 17269.29 3872.58 37891.72 00:11:31.498 =================================================================================================================== 00:11:31.498 Total : 7409.51 28.94 0.00 0.00 17269.29 3872.58 37891.72 00:11:31.498 0 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66337 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66337 ']' 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66337 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66337 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66337' 00:11:31.498 killing process with pid 66337 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66337 
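The grow-under-load check in the run above reduces to one RPC issued while bdevperf keeps writing, plus a jq query on the lvstore; the MiB/s column in the table is simply IOPS scaled by the 4 KiB I/O size. A rough sketch with the UUID from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# grow the lvstore onto the space exposed by the earlier bdev_aio_rescan, with I/O still in flight
$rpc bdev_lvol_grow_lvstore -u a566d7f5-bde2-4aab-a937-abb832e622b3
# the store reported 49 data clusters before the grow and is expected to report 99 afterwards
$rpc bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 | jq -r '.[0].total_data_clusters'
# sanity check on the bdevperf numbers: MiB/s = IOPS * 4096 / 2^20
awk 'BEGIN { printf "%.2f\n", 7404.70 * 4096 / (1024 * 1024) }'   # ~28.92, matching the 10 s average above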
00:11:31.498 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.498 00:11:31.498 Latency(us) 00:11:31.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.498 =================================================================================================================== 00:11:31.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.498 23:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66337 00:11:31.755 23:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:32.321 23:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:32.321 23:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:32.321 23:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:32.606 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:32.606 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:32.606 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65975 00:11:32.606 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65975 00:11:32.864 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65975 Killed "${NVMF_APP[@]}" "$@" 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:32.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66493 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66493 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66493 ']' 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
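The "dirty" half of the test is produced by killing the target without any clean shutdown, so the lvstore metadata is never flushed, and then starting a fresh target in the same namespace. In outline (65975 was the pid of the first target in this run; waitforlisten is the harness helper that polls /var/tmp/spdk.sock):

# SIGKILL the target that owns the lvstore; the shell reports it as Killed, which the script tolerates
kill -9 "$nvmfpid" && wait "$nvmfpid" || true
# bring up a replacement nvmf_tgt inside the test netns and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"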
00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.865 23:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:32.865 [2024-07-24 23:11:55.134786] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:32.865 [2024-07-24 23:11:55.134893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.865 [2024-07-24 23:11:55.274287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.123 [2024-07-24 23:11:55.393605] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.123 [2024-07-24 23:11:55.393660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.123 [2024-07-24 23:11:55.393689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.123 [2024-07-24 23:11:55.393697] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.123 [2024-07-24 23:11:55.393704] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.123 [2024-07-24 23:11:55.393734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.123 [2024-07-24 23:11:55.449151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.689 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:33.947 [2024-07-24 23:11:56.409708] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:33.947 [2024-07-24 23:11:56.410303] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:33.947 [2024-07-24 23:11:56.410598] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
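Re-creating the AIO bdev on the fresh target is what triggers the blobstore recovery notices above: examine finds the dirty lvstore on the backing file, replays it, and the lvol comes back under its original UUID with no explicit import step. Roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# register the same backing file with the new target; lvstore recovery runs during bdev examine
$rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
$rpc bdev_wait_for_examine
# the lvol created before the kill should reappear under its original UUID
$rpc bdev_get_bdevs -b f373c2c0-1372-4ca4-8e97-0143aec0e122 -t 2000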
00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:34.206 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:34.464 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f373c2c0-1372-4ca4-8e97-0143aec0e122 -t 2000 00:11:34.464 [ 00:11:34.464 { 00:11:34.464 "name": "f373c2c0-1372-4ca4-8e97-0143aec0e122", 00:11:34.464 "aliases": [ 00:11:34.464 "lvs/lvol" 00:11:34.464 ], 00:11:34.464 "product_name": "Logical Volume", 00:11:34.464 "block_size": 4096, 00:11:34.464 "num_blocks": 38912, 00:11:34.464 "uuid": "f373c2c0-1372-4ca4-8e97-0143aec0e122", 00:11:34.464 "assigned_rate_limits": { 00:11:34.464 "rw_ios_per_sec": 0, 00:11:34.464 "rw_mbytes_per_sec": 0, 00:11:34.464 "r_mbytes_per_sec": 0, 00:11:34.464 "w_mbytes_per_sec": 0 00:11:34.464 }, 00:11:34.464 "claimed": false, 00:11:34.464 "zoned": false, 00:11:34.464 "supported_io_types": { 00:11:34.464 "read": true, 00:11:34.464 "write": true, 00:11:34.464 "unmap": true, 00:11:34.464 "flush": false, 00:11:34.464 "reset": true, 00:11:34.464 "nvme_admin": false, 00:11:34.464 "nvme_io": false, 00:11:34.464 "nvme_io_md": false, 00:11:34.464 "write_zeroes": true, 00:11:34.464 "zcopy": false, 00:11:34.464 "get_zone_info": false, 00:11:34.464 "zone_management": false, 00:11:34.464 "zone_append": false, 00:11:34.464 "compare": false, 00:11:34.464 "compare_and_write": false, 00:11:34.464 "abort": false, 00:11:34.464 "seek_hole": true, 00:11:34.464 "seek_data": true, 00:11:34.464 "copy": false, 00:11:34.464 "nvme_iov_md": false 00:11:34.464 }, 00:11:34.464 "driver_specific": { 00:11:34.464 "lvol": { 00:11:34.464 "lvol_store_uuid": "a566d7f5-bde2-4aab-a937-abb832e622b3", 00:11:34.464 "base_bdev": "aio_bdev", 00:11:34.464 "thin_provision": false, 00:11:34.464 "num_allocated_clusters": 38, 00:11:34.464 "snapshot": false, 00:11:34.464 "clone": false, 00:11:34.464 "esnap_clone": false 00:11:34.464 } 00:11:34.464 } 00:11:34.464 } 00:11:34.464 ] 00:11:34.464 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:34.464 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:34.464 23:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:35.030 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:35.031 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:35.031 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:35.031 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:35.031 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:35.288 [2024-07-24 23:11:57.686612] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.288 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:35.289 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.289 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:35.289 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:35.578 request: 00:11:35.578 { 00:11:35.578 "uuid": "a566d7f5-bde2-4aab-a937-abb832e622b3", 00:11:35.578 "method": "bdev_lvol_get_lvstores", 00:11:35.578 "req_id": 1 00:11:35.578 } 00:11:35.578 Got JSON-RPC error response 00:11:35.578 response: 00:11:35.578 { 00:11:35.578 "code": -19, 00:11:35.578 "message": "No such device" 00:11:35.578 } 00:11:35.578 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:11:35.578 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:35.578 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:35.578 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:35.579 23:11:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:35.849 aio_bdev 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.849 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:36.108 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f373c2c0-1372-4ca4-8e97-0143aec0e122 -t 2000 00:11:36.366 [ 00:11:36.366 { 00:11:36.366 "name": "f373c2c0-1372-4ca4-8e97-0143aec0e122", 00:11:36.366 "aliases": [ 00:11:36.366 "lvs/lvol" 00:11:36.366 ], 00:11:36.366 "product_name": "Logical Volume", 00:11:36.366 "block_size": 4096, 00:11:36.366 "num_blocks": 38912, 00:11:36.366 "uuid": "f373c2c0-1372-4ca4-8e97-0143aec0e122", 00:11:36.366 "assigned_rate_limits": { 00:11:36.366 "rw_ios_per_sec": 0, 00:11:36.366 "rw_mbytes_per_sec": 0, 00:11:36.366 "r_mbytes_per_sec": 0, 00:11:36.366 "w_mbytes_per_sec": 0 00:11:36.366 }, 00:11:36.366 "claimed": false, 00:11:36.366 "zoned": false, 00:11:36.366 "supported_io_types": { 00:11:36.366 "read": true, 00:11:36.366 "write": true, 00:11:36.366 "unmap": true, 00:11:36.366 "flush": false, 00:11:36.366 "reset": true, 00:11:36.366 "nvme_admin": false, 00:11:36.366 "nvme_io": false, 00:11:36.366 "nvme_io_md": false, 00:11:36.366 "write_zeroes": true, 00:11:36.366 "zcopy": false, 00:11:36.366 "get_zone_info": false, 00:11:36.366 "zone_management": false, 00:11:36.366 "zone_append": false, 00:11:36.366 "compare": false, 00:11:36.366 "compare_and_write": false, 00:11:36.366 "abort": false, 00:11:36.366 "seek_hole": true, 00:11:36.366 "seek_data": true, 00:11:36.366 "copy": false, 00:11:36.366 "nvme_iov_md": false 00:11:36.366 }, 00:11:36.366 "driver_specific": { 00:11:36.366 "lvol": { 00:11:36.366 "lvol_store_uuid": "a566d7f5-bde2-4aab-a937-abb832e622b3", 00:11:36.366 "base_bdev": "aio_bdev", 00:11:36.366 "thin_provision": false, 00:11:36.366 "num_allocated_clusters": 38, 00:11:36.366 "snapshot": false, 00:11:36.366 "clone": false, 00:11:36.366 "esnap_clone": false 00:11:36.366 } 00:11:36.366 } 00:11:36.366 } 00:11:36.366 ] 00:11:36.366 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:11:36.366 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:36.366 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:36.625 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:36.625 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:36.625 23:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:36.882 23:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:36.882 23:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f373c2c0-1372-4ca4-8e97-0143aec0e122 00:11:37.140 23:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u a566d7f5-bde2-4aab-a937-abb832e622b3 00:11:37.397 23:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:37.653 23:11:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:37.911 ************************************ 00:11:37.911 END TEST lvs_grow_dirty 00:11:37.911 ************************************ 00:11:37.911 00:11:37.911 real 0m20.754s 00:11:37.911 user 0m43.677s 00:11:37.911 sys 0m8.304s 00:11:37.911 23:12:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.911 23:12:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:38.170 nvmf_trace.0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:38.170 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:38.428 rmmod nvme_tcp 00:11:38.428 rmmod nvme_fabrics 00:11:38.428 rmmod nvme_keyring 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66493 ']' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66493 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66493 ']' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66493 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66493 00:11:38.428 killing process with pid 66493 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66493' 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66493 00:11:38.428 23:12:00 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66493 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.686 23:12:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:38.686 ************************************ 00:11:38.686 END TEST nvmf_lvs_grow 00:11:38.687 ************************************ 00:11:38.687 00:11:38.687 real 0m41.867s 00:11:38.687 user 1m7.393s 00:11:38.687 sys 0m11.874s 00:11:38.687 23:12:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:38.687 23:12:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.687 23:12:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:38.687 23:12:01 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:38.687 23:12:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:38.687 23:12:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.687 23:12:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:38.687 ************************************ 00:11:38.687 START TEST nvmf_bdev_io_wait 00:11:38.687 ************************************ 00:11:38.687 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:38.946 * Looking for test storage... 
00:11:38.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.946 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:38.947 Cannot find device "nvmf_tgt_br" 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:38.947 Cannot find device "nvmf_tgt_br2" 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:38.947 Cannot find device "nvmf_tgt_br" 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:38.947 Cannot find device "nvmf_tgt_br2" 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:38.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:38.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:38.947 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:39.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:11:39.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:11:39.206 00:11:39.206 --- 10.0.0.2 ping statistics --- 00:11:39.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.206 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:39.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:39.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:11:39.206 00:11:39.206 --- 10.0.0.3 ping statistics --- 00:11:39.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.206 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:39.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:39.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:39.206 00:11:39.206 --- 10.0.0.1 ping statistics --- 00:11:39.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.206 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66817 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66817 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66817 ']' 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
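Condensed from the nvmf_veth_init output above, the topology for the bdev_io_wait run is three veth pairs whose target ends live in nvmf_tgt_ns_spdk and whose bridge ends are enslaved to nvmf_br; that bridge is what lets the 10.0.0.1 initiator reach the 10.0.0.2/10.0.0.3 listeners the pings just verified. A sketch of the setup (root privileges assumed, error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # connectivity check before the target starts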
00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.206 23:12:01 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:39.206 [2024-07-24 23:12:01.645852] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:39.206 [2024-07-24 23:12:01.646230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.477 [2024-07-24 23:12:01.788273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.478 [2024-07-24 23:12:01.904866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.478 [2024-07-24 23:12:01.905109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.478 [2024-07-24 23:12:01.905302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.478 [2024-07-24 23:12:01.905467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.478 [2024-07-24 23:12:01.905565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.478 [2024-07-24 23:12:01.905767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.478 [2024-07-24 23:12:01.906009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.478 [2024-07-24 23:12:01.906310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.478 [2024-07-24 23:12:01.906319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.412 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.412 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:11:40.412 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.412 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 [2024-07-24 23:12:02.721026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.413 
23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 [2024-07-24 23:12:02.737791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 Malloc0 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.413 [2024-07-24 23:12:02.802731] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66852 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66854 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:40.413 { 
00:11:40.413 "params": { 00:11:40.413 "name": "Nvme$subsystem", 00:11:40.413 "trtype": "$TEST_TRANSPORT", 00:11:40.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.413 "adrfam": "ipv4", 00:11:40.413 "trsvcid": "$NVMF_PORT", 00:11:40.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.413 "hdgst": ${hdgst:-false}, 00:11:40.413 "ddgst": ${ddgst:-false} 00:11:40.413 }, 00:11:40.413 "method": "bdev_nvme_attach_controller" 00:11:40.413 } 00:11:40.413 EOF 00:11:40.413 )") 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66856 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:40.413 { 00:11:40.413 "params": { 00:11:40.413 "name": "Nvme$subsystem", 00:11:40.413 "trtype": "$TEST_TRANSPORT", 00:11:40.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.413 "adrfam": "ipv4", 00:11:40.413 "trsvcid": "$NVMF_PORT", 00:11:40.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.413 "hdgst": ${hdgst:-false}, 00:11:40.413 "ddgst": ${ddgst:-false} 00:11:40.413 }, 00:11:40.413 "method": "bdev_nvme_attach_controller" 00:11:40.413 } 00:11:40.413 EOF 00:11:40.413 )") 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66859 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:40.413 { 00:11:40.413 "params": { 00:11:40.413 "name": "Nvme$subsystem", 00:11:40.413 "trtype": "$TEST_TRANSPORT", 00:11:40.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.413 "adrfam": "ipv4", 00:11:40.413 "trsvcid": "$NVMF_PORT", 00:11:40.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.413 "hdgst": ${hdgst:-false}, 00:11:40.413 "ddgst": ${ddgst:-false} 00:11:40.413 }, 00:11:40.413 "method": "bdev_nvme_attach_controller" 00:11:40.413 } 00:11:40.413 EOF 00:11:40.413 )") 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # config=() 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:40.413 { 00:11:40.413 "params": { 00:11:40.413 "name": "Nvme$subsystem", 00:11:40.413 "trtype": "$TEST_TRANSPORT", 00:11:40.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.413 "adrfam": "ipv4", 00:11:40.413 "trsvcid": "$NVMF_PORT", 00:11:40.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.413 "hdgst": ${hdgst:-false}, 00:11:40.413 "ddgst": ${ddgst:-false} 00:11:40.413 }, 00:11:40.413 "method": "bdev_nvme_attach_controller" 00:11:40.413 } 00:11:40.413 EOF 00:11:40.413 )") 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:40.413 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:40.413 "params": { 00:11:40.413 "name": "Nvme1", 00:11:40.413 "trtype": "tcp", 00:11:40.413 "traddr": "10.0.0.2", 00:11:40.413 "adrfam": "ipv4", 00:11:40.413 "trsvcid": "4420", 00:11:40.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.414 "hdgst": false, 00:11:40.414 "ddgst": false 00:11:40.414 }, 00:11:40.414 "method": "bdev_nvme_attach_controller" 00:11:40.414 }' 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:40.414 "params": { 00:11:40.414 "name": "Nvme1", 00:11:40.414 "trtype": "tcp", 00:11:40.414 "traddr": "10.0.0.2", 00:11:40.414 "adrfam": "ipv4", 00:11:40.414 "trsvcid": "4420", 00:11:40.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.414 "hdgst": false, 00:11:40.414 "ddgst": false 00:11:40.414 }, 00:11:40.414 "method": "bdev_nvme_attach_controller" 00:11:40.414 }' 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
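The heredoc fragments above are how gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per requested subsystem; the IFS=','/printf pair joins them and jq renders the result that each bdevperf instance reads on /dev/fd/63. Roughly, the configuration a bdevperf process receives looks like the sketch below. The params block is exactly what is printed above; the subsystems/bdev wrapper follows SPDK's standard JSON config layout and its exact contents can vary between SPDK versions, and the /tmp path is only an illustrative stand-in for the process substitution:

cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

Passing such a file via --json makes bdevperf attach the NVMe/TCP controller at startup and expose Nvme1n1 as the benchmark target before the workload begins.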
00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:40.414 "params": { 00:11:40.414 "name": "Nvme1", 00:11:40.414 "trtype": "tcp", 00:11:40.414 "traddr": "10.0.0.2", 00:11:40.414 "adrfam": "ipv4", 00:11:40.414 "trsvcid": "4420", 00:11:40.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.414 "hdgst": false, 00:11:40.414 "ddgst": false 00:11:40.414 }, 00:11:40.414 "method": "bdev_nvme_attach_controller" 00:11:40.414 }' 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:40.414 "params": { 00:11:40.414 "name": "Nvme1", 00:11:40.414 "trtype": "tcp", 00:11:40.414 "traddr": "10.0.0.2", 00:11:40.414 "adrfam": "ipv4", 00:11:40.414 "trsvcid": "4420", 00:11:40.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.414 "hdgst": false, 00:11:40.414 "ddgst": false 00:11:40.414 }, 00:11:40.414 "method": "bdev_nvme_attach_controller" 00:11:40.414 }' 00:11:40.414 [2024-07-24 23:12:02.872687] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:40.414 [2024-07-24 23:12:02.872789] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:40.414 [2024-07-24 23:12:02.873951] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:40.414 [2024-07-24 23:12:02.874062] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:40.414 23:12:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66852 00:11:40.414 [2024-07-24 23:12:02.884721] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:40.414 [2024-07-24 23:12:02.885208] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:40.414 [2024-07-24 23:12:02.886152] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:11:40.414 [2024-07-24 23:12:02.886472] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:40.672 [2024-07-24 23:12:03.117837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.931 [2024-07-24 23:12:03.201510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.931 [2024-07-24 23:12:03.239936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:40.931 [2024-07-24 23:12:03.284799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.931 [2024-07-24 23:12:03.303509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:40.931 [2024-07-24 23:12:03.335235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:11:40.931 [2024-07-24 23:12:03.355344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.931 [2024-07-24 23:12:03.400600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:40.931 Running I/O for 1 seconds... 00:11:41.193 [2024-07-24 23:12:03.418813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.193 [2024-07-24 23:12:03.465309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:41.193 [2024-07-24 23:12:03.482112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:41.193 Running I/O for 1 seconds... 00:11:41.193 [2024-07-24 23:12:03.518283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:41.194 Running I/O for 1 seconds... 00:11:41.194 Running I/O for 1 seconds... 
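Four independent bdevperf processes are running at this point, one per workload: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each at queue depth 128 with 4096-byte I/O and a 1 second run against the same Nvme1n1 bdev. Any one of them can be reproduced in isolation along the lines of the sketch below (paths are those of this CI checkout; the JSON file stands in for the /dev/fd/63 process substitution):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# -m core mask, -i shm id, -q queue depth, -o I/O size in bytes,
# -w workload type, -t run time in seconds, -s DPDK memory in MB
"$bdevperf" -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json \
    -q 128 -o 4096 -w write -t 1 -s 256

In the results that follow, the flush job reports far higher IOPS than the data-moving jobs, which is expected against a Malloc backing bdev where a flush completes without moving any data.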
00:11:42.131 00:11:42.131 Latency(us) 00:11:42.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.131 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:42.131 Nvme1n1 : 1.00 172406.04 673.46 0.00 0.00 739.75 348.16 3723.64 00:11:42.131 =================================================================================================================== 00:11:42.131 Total : 172406.04 673.46 0.00 0.00 739.75 348.16 3723.64 00:11:42.131 00:11:42.131 Latency(us) 00:11:42.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.131 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:42.131 Nvme1n1 : 1.01 9915.78 38.73 0.00 0.00 12854.30 7685.59 19899.11 00:11:42.131 =================================================================================================================== 00:11:42.131 Total : 9915.78 38.73 0.00 0.00 12854.30 7685.59 19899.11 00:11:42.389 00:11:42.389 Latency(us) 00:11:42.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.389 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:42.389 Nvme1n1 : 1.01 6189.51 24.18 0.00 0.00 20537.40 7357.91 28835.84 00:11:42.389 =================================================================================================================== 00:11:42.389 Total : 6189.51 24.18 0.00 0.00 20537.40 7357.91 28835.84 00:11:42.389 00:11:42.389 Latency(us) 00:11:42.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.389 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:42.389 Nvme1n1 : 1.01 4417.31 17.26 0.00 0.00 28783.60 13107.20 43611.23 00:11:42.389 =================================================================================================================== 00:11:42.389 Total : 4417.31 17.26 0.00 0.00 28783.60 13107.20 43611.23 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66854 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66856 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66859 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.648 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.907 rmmod nvme_tcp 00:11:42.907 rmmod nvme_fabrics 00:11:42.907 rmmod nvme_keyring 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66817 ']' 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66817 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66817 ']' 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66817 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66817 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:42.907 killing process with pid 66817 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66817' 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66817 00:11:42.907 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66817 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:43.175 00:11:43.175 real 0m4.429s 00:11:43.175 user 0m19.760s 00:11:43.175 sys 0m2.290s 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.175 23:12:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:43.175 ************************************ 00:11:43.175 END TEST nvmf_bdev_io_wait 00:11:43.175 ************************************ 00:11:43.175 23:12:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:43.175 23:12:05 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:43.175 23:12:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.175 23:12:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.175 23:12:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:43.175 ************************************ 00:11:43.175 START TEST nvmf_queue_depth 00:11:43.175 ************************************ 00:11:43.175 23:12:05 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:43.435 * Looking for test storage... 00:11:43.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:43.435 Cannot find device "nvmf_tgt_br" 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.435 Cannot find device "nvmf_tgt_br2" 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:43.435 Cannot find device "nvmf_tgt_br" 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:43.435 Cannot find device "nvmf_tgt_br2" 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:43.435 23:12:05 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:43.435 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:43.436 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.694 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.694 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.694 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:43.694 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.695 23:12:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:11:43.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:43.695 00:11:43.695 --- 10.0.0.2 ping statistics --- 00:11:43.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.695 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:43.695 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.695 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:11:43.695 00:11:43.695 --- 10.0.0.3 ping statistics --- 00:11:43.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.695 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:43.695 00:11:43.695 --- 10.0.0.1 ping statistics --- 00:11:43.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.695 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=67097 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 67097 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67097 ']' 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
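The queue_depth test now builds the target over RPC and then drives it from a separate bdevperf process at queue depth 1024. rpc_cmd in the log is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence shown in the following lines is equivalent to roughly this consolidated sketch (command names, NQNs and addresses are taken verbatim from the log; the backgrounding and sleep are illustrative, the real harness polls with waitforlisten instead):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf waits (-z) on its own RPC socket, the controller is
# attached over that socket, then the 10 s verify run is kicked off remotely
"$bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2   # stand-in for waitforlisten on /var/tmp/bdevperf.sock
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests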
00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.695 23:12:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.695 [2024-07-24 23:12:06.105354] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:11:43.695 [2024-07-24 23:12:06.105479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.953 [2024-07-24 23:12:06.246608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.953 [2024-07-24 23:12:06.384350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.953 [2024-07-24 23:12:06.384448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.953 [2024-07-24 23:12:06.384460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.953 [2024-07-24 23:12:06.384468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.953 [2024-07-24 23:12:06.384476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.953 [2024-07-24 23:12:06.384503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.211 [2024-07-24 23:12:06.465441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [2024-07-24 23:12:07.047638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 Malloc0 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [2024-07-24 23:12:07.113987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67129 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67129 /var/tmp/bdevperf.sock 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67129 ']' 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.779 23:12:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:44.779 [2024-07-24 23:12:07.175291] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:11:44.779 [2024-07-24 23:12:07.175399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67129 ] 00:11:45.037 [2024-07-24 23:12:07.317362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.037 [2024-07-24 23:12:07.451591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.037 [2024-07-24 23:12:07.511624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:45.974 NVMe0n1 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.974 23:12:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:45.974 Running I/O for 10 seconds... 00:11:55.947 00:11:55.947 Latency(us) 00:11:55.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.947 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:55.947 Verification LBA range: start 0x0 length 0x4000 00:11:55.947 NVMe0n1 : 10.09 7983.88 31.19 0.00 0.00 127550.90 27763.43 124875.87 00:11:55.947 =================================================================================================================== 00:11:55.947 Total : 7983.88 31.19 0.00 0.00 127550.90 27763.43 124875.87 00:11:55.947 0 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67129 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67129 ']' 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67129 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67129 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:55.947 killing process with pid 67129 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67129' 00:11:55.947 Received shutdown signal, test time was about 10.000000 seconds 00:11:55.947 00:11:55.947 Latency(us) 00:11:55.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:55.947 =================================================================================================================== 00:11:55.947 Total : 0.00 0.00 0.00 0.00 0.00 
0.00 0.00 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67129 00:11:55.947 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67129 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.205 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.205 rmmod nvme_tcp 00:11:56.463 rmmod nvme_fabrics 00:11:56.463 rmmod nvme_keyring 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 67097 ']' 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 67097 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67097 ']' 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67097 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67097 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:56.463 killing process with pid 67097 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67097' 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67097 00:11:56.463 23:12:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67097 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:56.722 00:11:56.722 real 
0m13.552s 00:11:56.722 user 0m23.082s 00:11:56.722 sys 0m2.415s 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.722 23:12:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:56.722 ************************************ 00:11:56.722 END TEST nvmf_queue_depth 00:11:56.722 ************************************ 00:11:56.722 23:12:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:56.722 23:12:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:56.722 23:12:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:56.722 23:12:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.722 23:12:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:56.981 ************************************ 00:11:56.981 START TEST nvmf_target_multipath 00:11:56.981 ************************************ 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:56.981 * Looking for test storage... 00:11:56.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:56.981 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:56.982 23:12:19 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:56.982 Cannot find device "nvmf_tgt_br" 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:56.982 Cannot find device "nvmf_tgt_br2" 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:56.982 Cannot find device "nvmf_tgt_br" 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:56.982 Cannot find device "nvmf_tgt_br2" 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:56.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:56.982 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:56.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:57.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:57.242 00:11:57.242 --- 10.0.0.2 ping statistics --- 00:11:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.242 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:57.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:57.242 00:11:57.242 --- 10.0.0.3 ping statistics --- 00:11:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.242 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:11:57.242 00:11:57.242 --- 10.0.0.1 ping statistics --- 00:11:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.242 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67450 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67450 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67450 ']' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.242 23:12:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:57.500 [2024-07-24 23:12:19.761358] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
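Aside: the nvmf_veth_init sequence traced above (the "Cannot find device" / "Cannot open network namespace" messages are only the best-effort teardown of a previous topology and are expected on a fresh host) reduces to the sketch below. All interface names, addresses, and iptables rules are taken from the trace itself; only the ordering is condensed.

# host-side initiator veth and two target-side veths moved into the test namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target path 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target path 2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the peer ends together so 10.0.0.1 can reach both target addresses
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# connectivity check, exactly as in the log
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1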
00:11:57.500 [2024-07-24 23:12:19.761447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.500 [2024-07-24 23:12:19.902765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.796 [2024-07-24 23:12:20.034583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.796 [2024-07-24 23:12:20.034652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.796 [2024-07-24 23:12:20.034665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.797 [2024-07-24 23:12:20.034676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.797 [2024-07-24 23:12:20.034685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.797 [2024-07-24 23:12:20.034860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.797 [2024-07-24 23:12:20.035464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.797 [2024-07-24 23:12:20.035590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.797 [2024-07-24 23:12:20.035606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.797 [2024-07-24 23:12:20.095337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.386 23:12:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:58.645 [2024-07-24 23:12:21.073935] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.645 23:12:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:58.904 Malloc0 00:11:58.904 23:12:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:59.469 23:12:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.469 23:12:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.727 [2024-07-24 23:12:22.122649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.727 23:12:22 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:59.985 [2024-07-24 23:12:22.367024] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:59.985 23:12:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:00.243 23:12:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:02.787 23:12:24 
nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67540 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:02.787 23:12:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:02.787 [global] 00:12:02.787 thread=1 00:12:02.787 invalidate=1 00:12:02.787 rw=randrw 00:12:02.787 time_based=1 00:12:02.787 runtime=6 00:12:02.787 ioengine=libaio 00:12:02.787 direct=1 00:12:02.787 bs=4096 00:12:02.787 iodepth=128 00:12:02.787 norandommap=0 00:12:02.787 numjobs=1 00:12:02.787 00:12:02.787 verify_dump=1 00:12:02.787 verify_backlog=512 00:12:02.787 verify_state_save=0 00:12:02.787 do_verify=1 00:12:02.787 verify=crc32c-intel 00:12:02.787 [job0] 00:12:02.787 filename=/dev/nvme0n1 00:12:02.787 Could not set queue depth (nvme0n1) 00:12:02.787 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:02.787 fio-3.35 00:12:02.787 Starting 1 thread 00:12:03.353 23:12:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:03.612 23:12:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:03.870 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:03.870 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:03.871 
23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:03.871 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:04.129 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:04.387 23:12:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67540 00:12:08.571 00:12:08.571 job0: (groupid=0, jobs=1): err= 0: pid=67566: Wed Jul 24 23:12:30 2024 00:12:08.571 read: IOPS=9551, BW=37.3MiB/s (39.1MB/s)(224MiB/6007msec) 00:12:08.571 slat (usec): min=6, max=6446, avg=62.20, stdev=246.83 00:12:08.571 clat (usec): min=1675, max=17841, avg=9203.77, stdev=1667.49 00:12:08.571 lat (usec): min=2066, max=17855, avg=9265.97, stdev=1671.40 00:12:08.571 clat percentiles (usec): 00:12:08.571 | 1.00th=[ 4752], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 8356], 00:12:08.571 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:12:08.571 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[13042], 00:12:08.571 | 99.00th=[14484], 99.50th=[15008], 99.90th=[16909], 99.95th=[16909], 00:12:08.571 | 99.99th=[17695] 00:12:08.571 bw ( KiB/s): min= 5624, max=25592, per=49.99%, avg=19100.00, stdev=5816.50, samples=12 00:12:08.571 iops : min= 1406, max= 6398, avg=4775.00, stdev=1454.13, samples=12 00:12:08.571 write: IOPS=5523, BW=21.6MiB/s (22.6MB/s)(113MiB/5223msec); 0 zone resets 00:12:08.571 slat (usec): min=15, max=3004, avg=73.17, stdev=178.69 00:12:08.571 clat (usec): min=2404, max=17822, avg=8108.41, stdev=1527.09 00:12:08.571 lat (usec): min=2431, max=17861, avg=8181.58, stdev=1533.27 00:12:08.571 clat percentiles (usec): 00:12:08.571 | 1.00th=[ 3621], 5.00th=[ 4686], 10.00th=[ 6521], 20.00th=[ 7439], 00:12:08.571 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:12:08.571 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9372], 95.00th=[10159], 00:12:08.571 | 99.00th=[12649], 99.50th=[13304], 99.90th=[15139], 99.95th=[15533], 00:12:08.571 | 99.99th=[16319] 00:12:08.571 bw ( KiB/s): min= 5968, max=25144, per=86.82%, avg=19182.00, stdev=5595.72, samples=12 00:12:08.571 iops : min= 1492, max= 6286, avg=4795.50, stdev=1398.93, samples=12 00:12:08.571 lat (msec) : 2=0.01%, 4=0.92%, 10=85.51%, 20=13.56% 00:12:08.571 cpu : usr=5.68%, sys=20.86%, ctx=5004, majf=0, minf=108 00:12:08.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:08.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:08.571 issued rwts: total=57373,28849,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:08.571 00:12:08.571 Run status group 0 (all jobs): 00:12:08.571 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=224MiB (235MB), run=6007-6007msec 00:12:08.571 WRITE: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=113MiB (118MB), run=5223-5223msec 00:12:08.571 00:12:08.571 Disk stats (read/write): 00:12:08.571 nvme0n1: ios=56572/28289, merge=0/0, ticks=499595/215491, in_queue=715086, util=98.73% 00:12:08.571 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:12:08.829 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67645 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:09.088 23:12:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:09.088 [global] 00:12:09.088 thread=1 00:12:09.088 invalidate=1 00:12:09.088 rw=randrw 00:12:09.088 time_based=1 00:12:09.088 runtime=6 00:12:09.088 ioengine=libaio 00:12:09.088 direct=1 00:12:09.088 bs=4096 00:12:09.088 iodepth=128 00:12:09.088 norandommap=0 00:12:09.088 numjobs=1 00:12:09.088 00:12:09.088 verify_dump=1 00:12:09.088 verify_backlog=512 00:12:09.088 verify_state_save=0 00:12:09.088 do_verify=1 00:12:09.088 verify=crc32c-intel 00:12:09.088 [job0] 00:12:09.088 filename=/dev/nvme0n1 00:12:09.346 Could not set queue depth (nvme0n1) 00:12:09.346 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.346 fio-3.35 00:12:09.346 Starting 1 thread 00:12:10.281 23:12:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:10.539 23:12:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:10.797 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:11.127 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:11.386 23:12:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67645 00:12:15.573 00:12:15.573 job0: (groupid=0, jobs=1): err= 0: pid=67666: Wed Jul 24 23:12:37 2024 00:12:15.573 read: IOPS=9559, BW=37.3MiB/s (39.2MB/s)(224MiB/6008msec) 00:12:15.573 slat (usec): min=5, max=6999, avg=53.29, stdev=227.48 00:12:15.573 clat (usec): min=429, max=20761, avg=9211.55, stdev=2284.34 00:12:15.573 lat (usec): min=443, max=20771, avg=9264.84, stdev=2290.92 00:12:15.573 clat percentiles (usec): 00:12:15.573 | 1.00th=[ 2507], 5.00th=[ 5080], 10.00th=[ 6652], 20.00th=[ 8094], 00:12:15.573 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:12:15.573 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11731], 95.00th=[13566], 00:12:15.573 | 99.00th=[15664], 99.50th=[16909], 99.90th=[18744], 99.95th=[19530], 00:12:15.573 | 99.99th=[20579] 00:12:15.573 bw ( KiB/s): min= 7760, max=25824, per=51.33%, avg=19628.36, stdev=6596.90, samples=11 00:12:15.573 iops : min= 1940, max= 6456, avg=4907.09, stdev=1649.23, samples=11 00:12:15.573 write: IOPS=5734, BW=22.4MiB/s (23.5MB/s)(118MiB/5267msec); 0 zone resets 00:12:15.573 slat (usec): min=13, max=3249, avg=61.77, stdev=155.29 00:12:15.573 clat (usec): min=633, max=18565, avg=7737.44, stdev=1906.22 00:12:15.573 lat (usec): min=682, max=18588, avg=7799.21, stdev=1916.09 00:12:15.573 clat percentiles (usec): 00:12:15.573 | 1.00th=[ 2737], 5.00th=[ 3982], 10.00th=[ 4883], 20.00th=[ 6456], 00:12:15.573 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:12:15.573 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10159], 00:12:15.573 | 99.00th=[12911], 99.50th=[13698], 99.90th=[15139], 99.95th=[15533], 00:12:15.573 | 99.99th=[18220] 00:12:15.573 bw ( KiB/s): min= 7912, max=25832, per=85.79%, avg=19677.09, stdev=6461.01, samples=11 00:12:15.573 iops : min= 1978, max= 6458, avg=4919.27, stdev=1615.25, samples=11 00:12:15.573 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.04% 00:12:15.573 lat (msec) : 2=0.41%, 4=2.69%, 10=78.26%, 20=18.55%, 50=0.01% 00:12:15.573 cpu : usr=5.89%, sys=20.83%, ctx=5105, majf=0, minf=108 00:12:15.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:15.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:15.573 issued rwts: total=57431,30202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:15.573 00:12:15.573 Run status group 0 (all jobs): 00:12:15.573 READ: bw=37.3MiB/s (39.2MB/s), 37.3MiB/s-37.3MiB/s (39.2MB/s-39.2MB/s), io=224MiB (235MB), run=6008-6008msec 00:12:15.573 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=118MiB (124MB), run=5267-5267msec 00:12:15.573 00:12:15.573 Disk stats (read/write): 00:12:15.573 nvme0n1: ios=56822/29405, merge=0/0, ticks=503319/214424, in_queue=717743, util=98.75% 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.573 23:12:37 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:15.573 23:12:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:15.832 rmmod nvme_tcp 00:12:15.832 rmmod nvme_fabrics 00:12:15.832 rmmod nvme_keyring 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67450 ']' 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67450 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67450 ']' 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67450 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:12:15.832 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67450 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.192 killing process with pid 67450 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67450' 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67450 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 67450 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:16.192 00:12:16.192 real 0m19.430s 00:12:16.192 user 1m13.691s 00:12:16.192 sys 0m8.576s 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.192 ************************************ 00:12:16.192 END TEST nvmf_target_multipath 00:12:16.192 ************************************ 00:12:16.192 23:12:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:16.451 23:12:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:16.451 23:12:38 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:16.451 23:12:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.451 23:12:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.451 23:12:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:16.451 ************************************ 00:12:16.451 START TEST nvmf_zcopy 00:12:16.451 ************************************ 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:16.451 * Looking for test storage... 
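For readability, the nvmf_target_multipath run that ends here can be condensed into the sketch below. Every RPC, nvme-cli flag, address, and NQN is copied from the trace; the only liberties taken are the shell variables ($rpc, $nqn, and $NVME_HOSTNQN/$NVME_HOSTID, which the script obtains from 'nvme gen-hostnqn') and the trailing '&' standing in for the script's fio_pid/wait bookkeeping. The 'numa' and 'round-robin' multipath I/O policies selected before the two fio runs are only echoed in the trace without their destination, so they are left as a comment here.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# target side: one 64 MiB malloc namespace exported through two TCP listeners
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns $nqn Malloc0
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420

# initiator side: connect the same subsystem over both paths
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n $nqn -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G

# run I/O (fio run once under the 'numa' policy, once under 'round-robin')
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v &

# while fio runs, flip each listener's ANA state; check_ana_state polls the
# initiator's /sys/block/nvme0c*n1/ana_state files (20 s timeout) until they
# match -- note the RPC spells non_optimized, the sysfs file non-optimized
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized

# teardown
nvme disconnect -n $nqn
$rpc nvmf_delete_subsystem $nqn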
00:12:16.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.451 23:12:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:16.452 Cannot find device "nvmf_tgt_br" 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:16.452 Cannot find device "nvmf_tgt_br2" 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:16.452 Cannot find device "nvmf_tgt_br" 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:16.452 Cannot find device "nvmf_tgt_br2" 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:16.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:16.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:16.452 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:16.711 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:16.711 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:16.711 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:16.711 23:12:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:16.711 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:16.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:12:16.712 00:12:16.712 --- 10.0.0.2 ping statistics --- 00:12:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.712 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:16.712 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:16.712 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:16.712 00:12:16.712 --- 10.0.0.3 ping statistics --- 00:12:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.712 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:16.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
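
Taken together, the nvmf_veth_init trace above (including the benign "Cannot find device" / "Cannot open network namespace" messages, which only mean the previous run's interfaces were already gone) boils down to roughly the sequence below. This is a simplified sketch of what the traced ip/iptables commands do, not the test framework's exact code; interface names and addresses are taken from the log.

    # Clean up anything left over from a previous run; failures here are expected.
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true

    # Target-side interfaces live in their own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and tie the host-side peers together with a bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic and bridge forwarding, then sanity-check with ping.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
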
00:12:16.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:12:16.712 00:12:16.712 --- 10.0.0.1 ping statistics --- 00:12:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.712 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67910 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67910 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67910 ']' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.712 23:12:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:16.970 [2024-07-24 23:12:39.250341] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:16.970 [2024-07-24 23:12:39.250440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.970 [2024-07-24 23:12:39.396404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.229 [2024-07-24 23:12:39.569429] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.229 [2024-07-24 23:12:39.569495] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
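
nvmfappstart then launches the target application inside that namespace and waits for its RPC socket, which is what the NVMF_APP / nvmfpid / waitforlisten lines above record. A minimal equivalent is sketched below; the binary path and the -i 0 -e 0xFFFF -m 0x2 options are taken from the log, while the polling loop is only a simplified stand-in for the autotest waitforlisten helper.

    # Start nvmf_tgt inside the target namespace and remember its PID.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Wait until the app is up and listening on /var/tmp/spdk.sock before
    # issuing any RPCs (simplified version of waitforlisten).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done
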
00:12:17.229 [2024-07-24 23:12:39.569510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.229 [2024-07-24 23:12:39.569520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.229 [2024-07-24 23:12:39.569530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.229 [2024-07-24 23:12:39.569565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.229 [2024-07-24 23:12:39.657488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:17.797 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.797 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:12:17.797 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.797 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.797 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 [2024-07-24 23:12:40.310784] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 [2024-07-24 23:12:40.326911] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
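
In RPC terms, the target configuration traced in this part of zcopy.sh amounts to the calls below (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; the add_ns call, whose output follows in the log, is included here for completeness). All arguments are carried over from the traced commands.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with zero-copy enabled (--zcopy); -o and -c 0 are taken
    # verbatim from the traced command.
    "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host, up to 10 namespaces, plus a data listener
    # on 10.0.0.2:4420 and a discovery listener on the same address.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
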
00:12:18.055 malloc0 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:18.055 { 00:12:18.055 "params": { 00:12:18.055 "name": "Nvme$subsystem", 00:12:18.055 "trtype": "$TEST_TRANSPORT", 00:12:18.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:18.055 "adrfam": "ipv4", 00:12:18.055 "trsvcid": "$NVMF_PORT", 00:12:18.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:18.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:18.055 "hdgst": ${hdgst:-false}, 00:12:18.055 "ddgst": ${ddgst:-false} 00:12:18.055 }, 00:12:18.055 "method": "bdev_nvme_attach_controller" 00:12:18.055 } 00:12:18.055 EOF 00:12:18.055 )") 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:18.055 23:12:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:18.055 "params": { 00:12:18.055 "name": "Nvme1", 00:12:18.055 "trtype": "tcp", 00:12:18.055 "traddr": "10.0.0.2", 00:12:18.055 "adrfam": "ipv4", 00:12:18.055 "trsvcid": "4420", 00:12:18.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:18.055 "hdgst": false, 00:12:18.055 "ddgst": false 00:12:18.056 }, 00:12:18.056 "method": "bdev_nvme_attach_controller" 00:12:18.056 }' 00:12:18.056 [2024-07-24 23:12:40.431021] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:18.056 [2024-07-24 23:12:40.431103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67951 ] 00:12:18.314 [2024-07-24 23:12:40.573213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.314 [2024-07-24 23:12:40.698768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.314 [2024-07-24 23:12:40.768290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:18.572 Running I/O for 10 seconds... 
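
The verify workload above is driven by the bdevperf example application, fed the NVMe-oF attach parameters as a JSON config over a file descriptor (the gen_nvmf_target_json output printed in the log). A standalone approximation is sketched below, writing the JSON to a temporary file instead of /dev/fd/62; only the bdev_nvme_attach_controller parameters are taken from the log, while the surrounding subsystems/bdev wrapper is the usual SPDK JSON config layout and is an assumption here.

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 10-second verify workload, queue depth 128, 8 KiB I/O, as in the log.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192
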
00:12:28.541 00:12:28.541 Latency(us) 00:12:28.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.541 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:28.541 Verification LBA range: start 0x0 length 0x1000 00:12:28.541 Nvme1n1 : 10.02 5778.03 45.14 0.00 0.00 22081.67 2174.60 32887.16 00:12:28.541 =================================================================================================================== 00:12:28.541 Total : 5778.03 45.14 0.00 0.00 22081.67 2174.60 32887.16 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68066 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:28.799 { 00:12:28.799 "params": { 00:12:28.799 "name": "Nvme$subsystem", 00:12:28.799 "trtype": "$TEST_TRANSPORT", 00:12:28.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:28.799 "adrfam": "ipv4", 00:12:28.799 "trsvcid": "$NVMF_PORT", 00:12:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:28.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:28.799 "hdgst": ${hdgst:-false}, 00:12:28.799 "ddgst": ${ddgst:-false} 00:12:28.799 }, 00:12:28.799 "method": "bdev_nvme_attach_controller" 00:12:28.799 } 00:12:28.799 EOF 00:12:28.799 )") 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:28.799 23:12:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:28.800 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:12:28.800 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:12:28.800 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:12:28.800 23:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:28.800 "params": { 00:12:28.800 "name": "Nvme1", 00:12:28.800 "trtype": "tcp", 00:12:28.800 "traddr": "10.0.0.2", 00:12:28.800 "adrfam": "ipv4", 00:12:28.800 "trsvcid": "4420", 00:12:28.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:28.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:28.800 "hdgst": false, 00:12:28.800 "ddgst": false 00:12:28.800 }, 00:12:28.800 "method": "bdev_nvme_attach_controller" 00:12:28.800 }' 00:12:28.800 [2024-07-24 23:12:51.158046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.158094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.165973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.165997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.177975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.178000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.189980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.190011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.201993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.202023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.213983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.214013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.214754] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
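
The second run uses the same JSON plumbing but a 5-second 50/50 random read/write workload started in the background, with its PID captured (perfpid=68066 above) so that RPCs can be issued against the target while I/O is in flight. A hedged sketch of that pattern, reusing the hypothetical config file from the previous example:

    # Mixed random read/write for 5 seconds while the shell keeps control,
    # so namespace-management RPCs can run concurrently with the I/O.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!

    # ... issue RPCs against the live target here ...

    wait "$perfpid"
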
00:12:28.800 [2024-07-24 23:12:51.214881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68066 ] 00:12:28.800 [2024-07-24 23:12:51.225991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.226020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.237990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.238017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.249993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.250018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.261996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.262022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:28.800 [2024-07-24 23:12:51.273999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:28.800 [2024-07-24 23:12:51.274025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.286003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.286028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.298006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.298032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.310014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.310040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.322018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.322043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.334015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.334048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.346017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.346043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.356045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.058 [2024-07-24 23:12:51.358023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.358049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.370028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.370054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.382033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.382059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.394035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.394066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.406041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.406066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.418047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.418074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.430047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.430072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.058 [2024-07-24 23:12:51.442048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.058 [2024-07-24 23:12:51.442074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.454055] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.454081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.466083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.466128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.478059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.478085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.480580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.059 [2024-07-24 23:12:51.490071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.490099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.502075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.502103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.514071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.514101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.526084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.526110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 [2024-07-24 23:12:51.538089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.059 [2024-07-24 23:12:51.538114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.059 
[2024-07-24 23:12:51.542544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:29.317 [2024-07-24 23:12:51.550090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.550115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.562095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.562120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.574092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.574116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.586095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.586119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.598197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.598245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.610204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.610235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.622214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.622244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.634227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.634272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.646241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.646284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.658259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.658291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 Running I/O for 5 seconds... 
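
Each "Requested NSID 1 already in use" / "Unable to add namespace" pair in this stretch of the log is the target rejecting an nvmf_subsystem_add_ns RPC issued while namespace 1 is still attached, repeated continuously while the background bdevperf I/O runs. A single such rejection can be reproduced against the target configured earlier; this is only an illustration of the error path, not the test script's exact loop.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Namespace 1 is already backed by malloc0, so a second add with the same
    # NSID is refused with "Requested NSID 1 already in use".
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
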
00:12:29.317 [2024-07-24 23:12:51.673797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.673834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.689917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.689966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.705413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.705446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.721369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.721402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.740000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.740035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.755475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.755509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.772144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.772189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.317 [2024-07-24 23:12:51.788340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.317 [2024-07-24 23:12:51.788384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.806492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.806526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.821965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.821999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.839768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.839801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.854406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.854439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.869486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.869518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.884822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.884856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.903779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 
[2024-07-24 23:12:51.903814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.919100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.919150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.935971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.936006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.953757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.953791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.968464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.968498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.984095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.984145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:51.993837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:51.993871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:52.010062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:52.010097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:52.026198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:52.026231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:52.036495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:52.036527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.576 [2024-07-24 23:12:52.052409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.576 [2024-07-24 23:12:52.052442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.067822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.067864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.083556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.083596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.101124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.101177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.116967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.117043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.127648] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.127683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.142625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.142662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.158335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.158368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.173980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.174016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.183753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.183787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.834 [2024-07-24 23:12:52.200108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.834 [2024-07-24 23:12:52.200161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.216207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.216255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.226077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.226116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.241087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.241149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.256035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.256080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.271367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.271407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.281201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.281235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.296969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.297003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:29.835 [2024-07-24 23:12:52.313267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:29.835 [2024-07-24 23:12:52.313304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.329738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.329779] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.348217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.348254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.363231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.363265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.373557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.373590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.388619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.388655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.405305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.405347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.423369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.423415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.438017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.438072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.454070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.454109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.472216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.472259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.487583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.487624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.504468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.504502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.521236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.521269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.537648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.537716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.554340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.554373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.093 [2024-07-24 23:12:52.572199] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.093 [2024-07-24 23:12:52.572246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.588017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.588064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.597908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.597971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.612815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.612863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.629466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.629514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.649365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.649397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.664311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.664357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.683805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.683854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.699359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.699391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.716281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.716328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.732412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.732445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.748931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.748978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.768084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.768117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.351 [2024-07-24 23:12:52.783788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.351 [2024-07-24 23:12:52.783836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.352 [2024-07-24 23:12:52.799710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.352 [2024-07-24 23:12:52.799743] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.352 [2024-07-24 23:12:52.817333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.352 [2024-07-24 23:12:52.817388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.352 [2024-07-24 23:12:52.833032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.352 [2024-07-24 23:12:52.833079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.850369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.850400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.866456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.866501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.877987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.878021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.893461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.893509] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.908886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.908934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.924645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.924678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.940204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.940264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.951217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.951264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.967400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.967439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.981244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.981275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:52.998040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:52.998090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.013345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.013379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.029956] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.030020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.044876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.044908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.060284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.060317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.074787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.074828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.609 [2024-07-24 23:12:53.090098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.609 [2024-07-24 23:12:53.090145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.105747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.105782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.115902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.115935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.130583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.130618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.146444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.146479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.156337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.156378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.172885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.172918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.189265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.189298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.204448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.204482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.214276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.214308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.226652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.226701] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.242233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.242267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.258592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.258626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.276721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.276755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.291776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.291810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.302013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.302046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.318301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.318333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:30.868 [2024-07-24 23:12:53.334386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:30.868 [2024-07-24 23:12:53.334419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.125 [2024-07-24 23:12:53.352560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.125 [2024-07-24 23:12:53.352595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.125 [2024-07-24 23:12:53.367952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.125 [2024-07-24 23:12:53.367989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.125 [2024-07-24 23:12:53.378399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.378432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.391183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.391216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.406877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.406911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.422791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.422825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.432376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.432408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.446946] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.446995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.461593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.461640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.476867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.476914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.492326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.492396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.502213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.502257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.516198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.516243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.533756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.533789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.549343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.549375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.565912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.565960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.582638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.582685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.126 [2024-07-24 23:12:53.599542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.126 [2024-07-24 23:12:53.599574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.383 [2024-07-24 23:12:53.615859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.383 [2024-07-24 23:12:53.615891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.383 [2024-07-24 23:12:53.631976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.383 [2024-07-24 23:12:53.632023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.649066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.649096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.665025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.665073] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.682224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.682299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.698352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.698381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.716429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.716460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.730380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.730414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.746587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.746634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.765442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.765488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.779978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.780025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.795917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.795963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.815611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.815644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.830573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.830606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.841084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.841132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.384 [2024-07-24 23:12:53.853692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.384 [2024-07-24 23:12:53.853727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.869092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.869139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.884774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.884807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.900449] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.900492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.910658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.910692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.925690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.925724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.937090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.937152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.952545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.952579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.967872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.967921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:53.984209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:53.984261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.001934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.001968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.017878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.017943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.028043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.028088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.045016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.045062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.060662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.060696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.070410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.070443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.085343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.642 [2024-07-24 23:12:54.085388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.642 [2024-07-24 23:12:54.100446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.643 [2024-07-24 23:12:54.100477] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.643 [2024-07-24 23:12:54.115899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.643 [2024-07-24 23:12:54.115962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.133016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.133061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.149522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.149553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.165958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.166003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.181420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.181450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.197096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.197152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.214787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.214832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.229765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.229809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.246893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.246954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.261736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.261816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.277823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.277871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.294929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.294980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.311492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.311538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.329221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.329264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.345377] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.345421] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.362618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.362665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:31.901 [2024-07-24 23:12:54.378712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:31.901 [2024-07-24 23:12:54.378758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.395127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.395199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.413777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.413821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.427189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.427233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.444909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.444956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.459867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.459930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.469374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.469405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.486173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.486248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.501398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.501446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.511408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.160 [2024-07-24 23:12:54.511446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.160 [2024-07-24 23:12:54.528850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.528911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.544057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.544090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.559710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.559743] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.578064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.578100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.593214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.593246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.611098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.611156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.627008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.627041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.161 [2024-07-24 23:12:54.643830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.161 [2024-07-24 23:12:54.643866] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.660583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.660615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.676014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.676047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.685953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.685986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.701085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.701135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.718431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.718479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.733995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.734029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.750984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.751031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.766825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.766857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.781665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.781712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.797086] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.797132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.806454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.806487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.823318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.823365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.840777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.840825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.856546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.856577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.866545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.866591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.881933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.881980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.420 [2024-07-24 23:12:54.897969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.420 [2024-07-24 23:12:54.898003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.678 [2024-07-24 23:12:54.913843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.678 [2024-07-24 23:12:54.913890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:54.932334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:54.932374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:54.947162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:54.947238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:54.963359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:54.963406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:54.981637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:54.981678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:54.997227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:54.997272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.015145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.015202] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.030564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.030610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.041937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.041982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.058024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.058070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.072941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.072989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.088548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.088596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.106577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.106624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.122456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.122503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.140038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.140084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.679 [2024-07-24 23:12:55.155503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.679 [2024-07-24 23:12:55.155535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.171721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.171770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.181199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.181272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.196647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.196679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.214826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.214906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.229817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.229864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.245416] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.245463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.262992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.263039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.277739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.277771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.293689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.293736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.309628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.309677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.328793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.328826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.344357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.344413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.361762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.361810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.376909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.376956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.386597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.386644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.402283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.937 [2024-07-24 23:12:55.402330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.937 [2024-07-24 23:12:55.419042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:32.938 [2024-07-24 23:12:55.419092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.434274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.434320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.451738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.451785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.466446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.466494] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.481985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.482034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.499907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.499953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.515060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.515108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.533239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.533286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.548208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.548271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.557694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.557726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.573749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.573797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.588375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.588406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.604891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.604924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.621158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.621234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.640201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.640243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.655382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.655430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.196 [2024-07-24 23:12:55.672600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.196 [2024-07-24 23:12:55.672633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.687255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.687301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.702806] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.702853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.719969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.720017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.736856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.736918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.753359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.753391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.769418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.769450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.787827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.787864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.802264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.802296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.817498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.817546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.827373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.827422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.843267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.843314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.859420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.859467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.876976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.877023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.891874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.891923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.901338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.901383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.917508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.917540] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.455 [2024-07-24 23:12:55.932323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.455 [2024-07-24 23:12:55.932353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:55.948338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:55.948409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:55.965598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:55.965630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:55.981509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:55.981556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:55.998335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:55.998380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.014064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.014110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.033262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.033307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.047710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.047773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.057939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.057985] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.073546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.073579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.091019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.091065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.107662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.107708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.126312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.126358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.140435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.140467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.155782] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.155829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.170445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.170493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.714 [2024-07-24 23:12:56.186387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.714 [2024-07-24 23:12:56.186420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.203002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.203049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.220655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.220690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.235672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.235719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.245608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.245654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.260072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.260120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.269784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.269831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.285656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.285686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.299615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.299660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.314623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.314684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.324043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.324090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.340279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.340325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.355880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.355915] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.374159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.374220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.388762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.388808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.404553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.404586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.423321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.423353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.438448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.438479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.973 [2024-07-24 23:12:56.457133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:33.973 [2024-07-24 23:12:56.457208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.472406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.472438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.491032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.491080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.506025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.506073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.523958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.523990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.539119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.539167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.557115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.557177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.571825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.571875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.587034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.587082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.596789] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.596839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.613279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.613313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.628386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.628419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.646123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.646199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.661184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.661244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.670668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.670716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 00:12:34.232 Latency(us) 00:12:34.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.232 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:34.232 Nvme1n1 : 5.01 11283.35 88.15 0.00 0.00 11330.74 4736.47 19899.11 00:12:34.232 =================================================================================================================== 00:12:34.232 Total : 11283.35 88.15 0.00 0.00 11330.74 4736.47 19899.11 00:12:34.232 [2024-07-24 23:12:56.681549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.681596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.693534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.693577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.232 [2024-07-24 23:12:56.705554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.232 [2024-07-24 23:12:56.705605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.717559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.717613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.729558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.729597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.741569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.741658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.753562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.753627] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.765565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.765601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.777585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.777625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.789578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.789679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.801587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.801668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.813621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.813687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.825578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.825648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.837602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.491 [2024-07-24 23:12:56.837639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.491 [2024-07-24 23:12:56.849587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.849654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.861604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.861655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.873643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.873699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.885597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.885641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.897567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.897611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.909598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.909642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.921616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.921667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.933629] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.933678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.945586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.945628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.957605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.957630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.492 [2024-07-24 23:12:56.969640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.492 [2024-07-24 23:12:56.969686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.751 [2024-07-24 23:12:56.981637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.751 [2024-07-24 23:12:56.981685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.751 [2024-07-24 23:12:56.993665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.751 [2024-07-24 23:12:56.993711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.751 [2024-07-24 23:12:57.005645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.751 [2024-07-24 23:12:57.005687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.751 [2024-07-24 23:12:57.017649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:34.751 [2024-07-24 23:12:57.017688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:34.751 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68066) - No such process 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68066 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:34.751 delay0 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.751 23:12:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:34.751 [2024-07-24 23:12:57.222897] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:42.864 Initializing NVMe Controllers 00:12:42.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:42.864 Initialization complete. Launching workers. 00:12:42.864 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 252, failed: 22509 00:12:42.864 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22648, failed to submit 113 00:12:42.864 success 22536, unsuccess 112, failed 0 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.864 rmmod nvme_tcp 00:12:42.864 rmmod nvme_fabrics 00:12:42.864 rmmod nvme_keyring 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67910 ']' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67910 ']' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:42.864 killing process with pid 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67910' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67910 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:42.864 00:12:42.864 real 0m26.095s 00:12:42.864 user 0m40.266s 00:12:42.864 sys 0m9.039s 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.864 23:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:42.864 ************************************ 00:12:42.864 END TEST nvmf_zcopy 00:12:42.864 ************************************ 00:12:42.864 23:13:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:42.864 23:13:04 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:42.864 23:13:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:42.864 23:13:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.864 23:13:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.864 ************************************ 00:12:42.864 START TEST nvmf_nmic 00:12:42.864 ************************************ 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:42.864 * Looking for test storage... 00:12:42.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:42.864 23:13:04 
nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.864 23:13:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 
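At this point nmic.sh has sourced nvmf/common.sh, which fixes the NVMe/TCP port at 4420, the controller serial at SPDKISFASTANDAWESOME, and derives a host NQN / host ID pair from nvme gen-hostnqn. Those values are exactly what the initiator-side connect further down in this log consumes. A minimal sketch of that call, assuming the UUID generated in this particular run (a fresh gen-hostnqn would return a different value; the shell variable names are only for illustration):

    # Host identity produced by nvme gen-hostnqn in this run
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53
    HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53
    # Attach to the subsystem the test exports on the target veth address
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"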
00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:42.865 Cannot find device "nvmf_tgt_br" 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:42.865 Cannot find device "nvmf_tgt_br2" 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:42.865 23:13:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:42.865 Cannot find device "nvmf_tgt_br" 
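nvmftestinit has just chosen the virtual (veth) layout used by all of these TCP tests: initiator address 10.0.0.1 on nvmf_init_if, target addresses 10.0.0.2 and 10.0.0.3 on nvmf_tgt_if and nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge; the "Cannot find device" lines are only the teardown of interfaces left over from a previous run. The trace that follows rebuilds that topology step by step; condensed into a standalone sketch (link-up commands omitted) it is roughly:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one initiator-facing, two target-facing
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and address everything
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br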
00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:42.865 Cannot find device "nvmf_tgt_br2" 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:42.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:42.865 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:42.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:42.865 00:12:42.865 --- 10.0.0.2 ping statistics --- 00:12:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.865 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:42.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:42.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:42.865 00:12:42.865 --- 10.0.0.3 ping statistics --- 00:12:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.865 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:42.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:42.865 00:12:42.865 --- 10.0.0.1 ping statistics --- 00:12:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.865 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68398 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68398 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68398 ']' 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
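With the iptables ACCEPT rules in place and all three addresses answering a one-packet ping, the test loads the nvme-tcp initiator module, and nvmfappstart then launches the SPDK target inside the namespace and waits for its JSON-RPC socket. Stripped of the helper functions, the essential shape of that step as this test exercises it is roughly the following (backgrounding and the socket wait are simplified here):

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once the app is listening on /var/tmp/spdk.sock, create the TCP transport
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192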
00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.865 23:13:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.148 [2024-07-24 23:13:05.367672] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:43.148 [2024-07-24 23:13:05.367799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.148 [2024-07-24 23:13:05.509063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:43.406 [2024-07-24 23:13:05.670860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.406 [2024-07-24 23:13:05.670944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.406 [2024-07-24 23:13:05.670963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.406 [2024-07-24 23:13:05.670978] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.406 [2024-07-24 23:13:05.670989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:43.406 [2024-07-24 23:13:05.671506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.406 [2024-07-24 23:13:05.671605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.406 [2024-07-24 23:13:05.671715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.406 [2024-07-24 23:13:05.671724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.406 [2024-07-24 23:13:05.776764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:43.972 [2024-07-24 23:13:06.430541] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.972 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 Malloc0 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 [2024-07-24 23:13:06.504550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 test case1: single bdev can't be used in multiple subsystems 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 [2024-07-24 23:13:06.528305] bdev.c:8075:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:44.230 [2024-07-24 23:13:06.528340] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:44.230 [2024-07-24 23:13:06.528352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.230 request: 00:12:44.230 { 00:12:44.230 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:44.230 "namespace": { 00:12:44.230 "bdev_name": "Malloc0", 00:12:44.230 "no_auto_visible": false 00:12:44.230 }, 00:12:44.230 "method": "nvmf_subsystem_add_ns", 00:12:44.230 "req_id": 1 00:12:44.230 } 00:12:44.230 Got JSON-RPC error response 00:12:44.230 response: 00:12:44.230 { 00:12:44.230 "code": -32602, 00:12:44.230 
"message": "Invalid parameters" 00:12:44.230 } 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:44.230 Adding namespace failed - expected result. 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:44.230 test case2: host connect to nvmf target in multiple paths 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:44.230 [2024-07-24 23:13:06.540435] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.230 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:44.488 23:13:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.488 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.488 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.488 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.488 23:13:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:46.386 23:13:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:46.386 [global] 00:12:46.386 thread=1 00:12:46.386 invalidate=1 00:12:46.386 rw=write 00:12:46.386 time_based=1 00:12:46.386 runtime=1 00:12:46.386 ioengine=libaio 00:12:46.386 direct=1 00:12:46.386 bs=4096 00:12:46.386 iodepth=1 00:12:46.386 norandommap=0 00:12:46.386 numjobs=1 00:12:46.386 00:12:46.386 verify_dump=1 00:12:46.386 verify_backlog=512 00:12:46.386 verify_state_save=0 00:12:46.386 do_verify=1 00:12:46.386 verify=crc32c-intel 00:12:46.386 [job0] 00:12:46.386 filename=/dev/nvme0n1 00:12:46.644 Could 
not set queue depth (nvme0n1) 00:12:46.644 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:46.644 fio-3.35 00:12:46.644 Starting 1 thread 00:12:47.634 00:12:47.634 job0: (groupid=0, jobs=1): err= 0: pid=68495: Wed Jul 24 23:13:10 2024 00:12:47.634 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:12:47.634 slat (nsec): min=13401, max=73326, avg=18423.09, stdev=6213.78 00:12:47.634 clat (usec): min=152, max=674, avg=215.21, stdev=32.67 00:12:47.634 lat (usec): min=169, max=689, avg=233.63, stdev=34.27 00:12:47.634 clat percentiles (usec): 00:12:47.634 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:12:47.634 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:12:47.634 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 265], 00:12:47.634 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 469], 99.95th=[ 510], 00:12:47.634 | 99.99th=[ 676] 00:12:47.634 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:47.634 slat (usec): min=19, max=100, avg=26.51, stdev= 8.38 00:12:47.634 clat (usec): min=91, max=533, avg=130.27, stdev=25.37 00:12:47.634 lat (usec): min=113, max=568, avg=156.78, stdev=28.40 00:12:47.634 clat percentiles (usec): 00:12:47.634 | 1.00th=[ 97], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 112], 00:12:47.634 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 133], 00:12:47.634 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 161], 95.00th=[ 174], 00:12:47.634 | 99.00th=[ 206], 99.50th=[ 227], 99.90th=[ 396], 99.95th=[ 416], 00:12:47.634 | 99.99th=[ 537] 00:12:47.634 bw ( KiB/s): min=11944, max=11944, per=100.00%, avg=11944.00, stdev= 0.00, samples=1 00:12:47.634 iops : min= 2986, max= 2986, avg=2986.00, stdev= 0.00, samples=1 00:12:47.634 lat (usec) : 100=1.89%, 250=92.78%, 500=5.28%, 750=0.06% 00:12:47.634 cpu : usr=1.70%, sys=9.20%, ctx=5080, majf=0, minf=2 00:12:47.634 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:47.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.634 issued rwts: total=2520,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.634 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:47.634 00:12:47.634 Run status group 0 (all jobs): 00:12:47.634 READ: bw=9.83MiB/s (10.3MB/s), 9.83MiB/s-9.83MiB/s (10.3MB/s-10.3MB/s), io=9.84MiB (10.3MB), run=1001-1001msec 00:12:47.634 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:47.634 00:12:47.634 Disk stats (read/write): 00:12:47.634 nvme0n1: ios=2130/2560, merge=0/0, ticks=497/367, in_queue=864, util=91.38% 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.893 rmmod nvme_tcp 00:12:47.893 rmmod nvme_fabrics 00:12:47.893 rmmod nvme_keyring 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68398 ']' 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68398 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68398 ']' 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68398 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68398 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:47.893 killing process with pid 68398 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68398' 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68398 00:12:47.893 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68398 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:48.459 00:12:48.459 real 0m5.932s 00:12:48.459 user 0m18.785s 00:12:48.459 sys 0m2.206s 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.459 ************************************ 00:12:48.459 END TEST nvmf_nmic 00:12:48.459 23:13:10 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.459 ************************************ 00:12:48.459 23:13:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:48.459 23:13:10 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:48.459 23:13:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:48.459 23:13:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.459 23:13:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:48.459 ************************************ 00:12:48.459 START TEST nvmf_fio_target 00:12:48.459 ************************************ 00:12:48.459 23:13:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:48.459 * Looking for test storage... 00:12:48.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.459 23:13:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.460 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:48.718 Cannot find device "nvmf_tgt_br" 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.718 Cannot find device "nvmf_tgt_br2" 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:48.718 23:13:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:12:48.718 Cannot find device "nvmf_tgt_br" 00:12:48.718 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:12:48.718 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:48.718 Cannot find device "nvmf_tgt_br2" 00:12:48.718 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:48.719 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:48.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:48.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:48.977 00:12:48.977 --- 10.0.0.2 ping statistics --- 00:12:48.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.977 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:48.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:48.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:12:48.977 00:12:48.977 --- 10.0.0.3 ping statistics --- 00:12:48.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.977 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:48.977 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:48.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:48.977 00:12:48.977 --- 10.0.0.1 ping statistics --- 00:12:48.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.978 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68672 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68672 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68672 ']' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.978 23:13:11 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.978 23:13:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 [2024-07-24 23:13:11.385306] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:12:48.978 [2024-07-24 23:13:11.386388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.260 [2024-07-24 23:13:11.526749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.260 [2024-07-24 23:13:11.658584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.260 [2024-07-24 23:13:11.658661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.260 [2024-07-24 23:13:11.658673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.260 [2024-07-24 23:13:11.658681] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.260 [2024-07-24 23:13:11.658688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.260 [2024-07-24 23:13:11.658827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.260 [2024-07-24 23:13:11.659492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.260 [2024-07-24 23:13:11.659732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.260 [2024-07-24 23:13:11.659739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.260 [2024-07-24 23:13:11.739435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.196 23:13:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:50.196 [2024-07-24 23:13:12.668711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.455 23:13:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:50.713 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:50.713 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:12:50.971 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:50.971 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.228 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:51.228 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:51.486 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:51.486 23:13:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:51.745 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.003 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:52.003 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.262 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:52.262 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.520 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:52.520 23:13:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:52.777 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:53.035 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:53.035 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.293 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:53.293 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.552 23:13:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.811 [2024-07-24 23:13:16.154578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.811 23:13:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:54.068 23:13:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:54.325 23:13:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:54.583 23:13:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:56.480 23:13:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:56.480 [global] 00:12:56.480 thread=1 00:12:56.480 invalidate=1 00:12:56.480 rw=write 00:12:56.480 time_based=1 00:12:56.480 runtime=1 00:12:56.480 ioengine=libaio 00:12:56.480 direct=1 00:12:56.480 bs=4096 00:12:56.480 iodepth=1 00:12:56.480 norandommap=0 00:12:56.480 numjobs=1 00:12:56.480 00:12:56.480 verify_dump=1 00:12:56.480 verify_backlog=512 00:12:56.480 verify_state_save=0 00:12:56.480 do_verify=1 00:12:56.480 verify=crc32c-intel 00:12:56.480 [job0] 00:12:56.480 filename=/dev/nvme0n1 00:12:56.480 [job1] 00:12:56.480 filename=/dev/nvme0n2 00:12:56.480 [job2] 00:12:56.480 filename=/dev/nvme0n3 00:12:56.480 [job3] 00:12:56.480 filename=/dev/nvme0n4 00:12:56.480 Could not set queue depth (nvme0n1) 00:12:56.480 Could not set queue depth (nvme0n2) 00:12:56.480 Could not set queue depth (nvme0n3) 00:12:56.480 Could not set queue depth (nvme0n4) 00:12:56.738 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.738 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.738 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.738 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:56.738 fio-3.35 00:12:56.738 Starting 4 threads 00:12:58.112 00:12:58.112 job0: (groupid=0, jobs=1): err= 0: pid=68856: Wed Jul 24 23:13:20 2024 00:12:58.112 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:58.112 slat (nsec): min=11100, max=52448, avg=17838.43, stdev=5223.10 00:12:58.112 clat (usec): min=216, max=505, avg=298.78, stdev=57.33 00:12:58.112 lat (usec): min=234, max=517, avg=316.62, stdev=55.74 00:12:58.112 clat percentiles (usec): 00:12:58.112 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:12:58.112 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 289], 00:12:58.112 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 392], 00:12:58.112 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 486], 99.95th=[ 506], 00:12:58.112 | 99.99th=[ 506] 00:12:58.112 write: IOPS=1826, BW=7305KiB/s (7480kB/s)(7312KiB/1001msec); 0 zone resets 00:12:58.112 slat (usec): min=13, 
max=143, avg=25.33, stdev= 7.89 00:12:58.112 clat (usec): min=105, max=503, avg=251.78, stdev=56.07 00:12:58.112 lat (usec): min=163, max=544, avg=277.11, stdev=57.20 00:12:58.112 clat percentiles (usec): 00:12:58.112 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 198], 00:12:58.112 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 260], 60.00th=[ 277], 00:12:58.112 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 330], 00:12:58.112 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 482], 99.95th=[ 502], 00:12:58.112 | 99.99th=[ 502] 00:12:58.112 bw ( KiB/s): min= 7520, max= 7520, per=23.71%, avg=7520.00, stdev= 0.00, samples=1 00:12:58.112 iops : min= 1880, max= 1880, avg=1880.00, stdev= 0.00, samples=1 00:12:58.112 lat (usec) : 250=36.27%, 500=63.67%, 750=0.06% 00:12:58.112 cpu : usr=1.70%, sys=6.50%, ctx=3364, majf=0, minf=1 00:12:58.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.112 issued rwts: total=1536,1828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.112 job1: (groupid=0, jobs=1): err= 0: pid=68857: Wed Jul 24 23:13:20 2024 00:12:58.112 read: IOPS=2357, BW=9431KiB/s (9657kB/s)(9440KiB/1001msec) 00:12:58.112 slat (nsec): min=9179, max=45567, avg=16141.12, stdev=3550.29 00:12:58.112 clat (usec): min=146, max=741, avg=223.09, stdev=67.62 00:12:58.112 lat (usec): min=161, max=753, avg=239.23, stdev=66.86 00:12:58.112 clat percentiles (usec): 00:12:58.112 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:12:58.112 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 204], 00:12:58.112 | 70.00th=[ 255], 80.00th=[ 273], 90.00th=[ 338], 95.00th=[ 359], 00:12:58.112 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 570], 99.95th=[ 717], 00:12:58.112 | 99.99th=[ 742] 00:12:58.112 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:58.112 slat (usec): min=12, max=145, avg=23.87, stdev= 6.24 00:12:58.112 clat (usec): min=95, max=367, avg=142.53, stdev=26.11 00:12:58.112 lat (usec): min=117, max=446, avg=166.40, stdev=26.76 00:12:58.112 clat percentiles (usec): 00:12:58.112 | 1.00th=[ 102], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:12:58.112 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 145], 00:12:58.112 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 172], 95.00th=[ 186], 00:12:58.112 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 297], 99.95th=[ 302], 00:12:58.112 | 99.99th=[ 367] 00:12:58.112 bw ( KiB/s): min=12288, max=12288, per=38.74%, avg=12288.00, stdev= 0.00, samples=1 00:12:58.112 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:58.113 lat (usec) : 100=0.14%, 250=84.37%, 500=15.41%, 750=0.08% 00:12:58.113 cpu : usr=2.20%, sys=7.80%, ctx=4920, majf=0, minf=16 00:12:58.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 issued rwts: total=2360,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.113 job2: (groupid=0, jobs=1): err= 0: pid=68858: Wed Jul 24 23:13:20 2024 00:12:58.113 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 
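For reference, the target and initiator setup that the fio.sh trace above performs can be collected into a single shell sketch. Every command below is taken from the logged rpc.py and nvme invocations; only the $rpc shorthand and the comments are added, and the listener address, NQN, serial and host UUID are simply the values recorded in this run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# two plain malloc bdevs used directly as namespaces (64 MB, 512-byte blocks)
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1

# two more malloc bdevs combined into a RAID0 bdev
$rpc bdev_malloc_create 64 512                                   # Malloc2
$rpc bdev_malloc_create 64 512                                   # Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'

# three more combined into a concat bdev
$rpc bdev_malloc_create 64 512                                   # Malloc4
$rpc bdev_malloc_create 64 512                                   # Malloc5
$rpc bdev_malloc_create 64 512                                   # Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# subsystem, namespaces and TCP listener, in the order logged above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# initiator side: connect, then wait for the four namespaces to appear
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
             --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4 with serial SPDKISFASTANDAWESOME, which is what the generated fio job files above target.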
00:12:58.113 slat (nsec): min=9433, max=72560, avg=20053.05, stdev=6996.53 00:12:58.113 clat (usec): min=196, max=8036, avg=342.03, stdev=327.04 00:12:58.113 lat (usec): min=215, max=8060, avg=362.08, stdev=328.23 00:12:58.113 clat percentiles (usec): 00:12:58.113 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 265], 00:12:58.113 | 30.00th=[ 281], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:12:58.113 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 392], 00:12:58.113 | 99.00th=[ 701], 99.50th=[ 799], 99.90th=[ 7308], 99.95th=[ 8029], 00:12:58.113 | 99.99th=[ 8029] 00:12:58.113 write: IOPS=1719, BW=6877KiB/s (7042kB/s)(6884KiB/1001msec); 0 zone resets 00:12:58.113 slat (usec): min=13, max=141, avg=29.85, stdev= 9.54 00:12:58.113 clat (usec): min=116, max=1124, avg=223.40, stdev=63.22 00:12:58.113 lat (usec): min=141, max=1173, avg=253.25, stdev=70.16 00:12:58.113 clat percentiles (usec): 00:12:58.113 | 1.00th=[ 129], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:12:58.113 | 30.00th=[ 167], 40.00th=[ 180], 50.00th=[ 247], 60.00th=[ 262], 00:12:58.113 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:12:58.113 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 392], 99.95th=[ 1123], 00:12:58.113 | 99.99th=[ 1123] 00:12:58.113 bw ( KiB/s): min= 7928, max= 7928, per=25.00%, avg=7928.00, stdev= 0.00, samples=1 00:12:58.113 iops : min= 1982, max= 1982, avg=1982.00, stdev= 0.00, samples=1 00:12:58.113 lat (usec) : 250=30.92%, 500=67.79%, 750=0.98%, 1000=0.09% 00:12:58.113 lat (msec) : 2=0.03%, 4=0.09%, 10=0.09% 00:12:58.113 cpu : usr=2.00%, sys=6.40%, ctx=3257, majf=0, minf=9 00:12:58.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 issued rwts: total=1536,1721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.113 job3: (groupid=0, jobs=1): err= 0: pid=68859: Wed Jul 24 23:13:20 2024 00:12:58.113 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:58.113 slat (nsec): min=10938, max=53893, avg=16866.76, stdev=5910.13 00:12:58.113 clat (usec): min=198, max=629, avg=300.12, stdev=52.57 00:12:58.113 lat (usec): min=215, max=649, avg=316.99, stdev=55.96 00:12:58.113 clat percentiles (usec): 00:12:58.113 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 255], 00:12:58.113 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 297], 00:12:58.113 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 379], 00:12:58.113 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 523], 99.95th=[ 627], 00:12:58.113 | 99.99th=[ 627] 00:12:58.113 write: IOPS=1826, BW=7305KiB/s (7480kB/s)(7312KiB/1001msec); 0 zone resets 00:12:58.113 slat (nsec): min=14150, max=88781, avg=27095.35, stdev=8062.98 00:12:58.113 clat (usec): min=111, max=583, avg=249.69, stdev=51.97 00:12:58.113 lat (usec): min=151, max=601, avg=276.78, stdev=56.82 00:12:58.113 clat percentiles (usec): 00:12:58.113 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:12:58.113 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 255], 60.00th=[ 269], 00:12:58.113 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 318], 00:12:58.113 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 478], 99.95th=[ 586], 00:12:58.113 | 99.99th=[ 586] 00:12:58.113 bw ( KiB/s): min= 7519, max= 7519, per=23.71%, avg=7519.00, stdev= 
0.00, samples=1 00:12:58.113 iops : min= 1879, max= 1879, avg=1879.00, stdev= 0.00, samples=1 00:12:58.113 lat (usec) : 250=33.23%, 500=66.68%, 750=0.09% 00:12:58.113 cpu : usr=1.70%, sys=6.40%, ctx=3365, majf=0, minf=9 00:12:58.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:58.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:58.113 issued rwts: total=1536,1828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:58.113 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:58.113 00:12:58.113 Run status group 0 (all jobs): 00:12:58.113 READ: bw=27.2MiB/s (28.5MB/s), 6138KiB/s-9431KiB/s (6285kB/s-9657kB/s), io=27.2MiB (28.5MB), run=1001-1001msec 00:12:58.113 WRITE: bw=31.0MiB/s (32.5MB/s), 6877KiB/s-9.99MiB/s (7042kB/s-10.5MB/s), io=31.0MiB (32.5MB), run=1001-1001msec 00:12:58.113 00:12:58.113 Disk stats (read/write): 00:12:58.113 nvme0n1: ios=1285/1536, merge=0/0, ticks=425/364, in_queue=789, util=88.05% 00:12:58.113 nvme0n2: ios=2048/2265, merge=0/0, ticks=438/333, in_queue=771, util=87.19% 00:12:58.113 nvme0n3: ios=1172/1536, merge=0/0, ticks=410/368, in_queue=778, util=88.27% 00:12:58.113 nvme0n4: ios=1236/1536, merge=0/0, ticks=366/397, in_queue=763, util=89.67% 00:12:58.113 23:13:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:58.113 [global] 00:12:58.113 thread=1 00:12:58.113 invalidate=1 00:12:58.113 rw=randwrite 00:12:58.113 time_based=1 00:12:58.113 runtime=1 00:12:58.113 ioengine=libaio 00:12:58.113 direct=1 00:12:58.113 bs=4096 00:12:58.113 iodepth=1 00:12:58.113 norandommap=0 00:12:58.113 numjobs=1 00:12:58.113 00:12:58.113 verify_dump=1 00:12:58.113 verify_backlog=512 00:12:58.113 verify_state_save=0 00:12:58.113 do_verify=1 00:12:58.113 verify=crc32c-intel 00:12:58.113 [job0] 00:12:58.113 filename=/dev/nvme0n1 00:12:58.113 [job1] 00:12:58.113 filename=/dev/nvme0n2 00:12:58.113 [job2] 00:12:58.113 filename=/dev/nvme0n3 00:12:58.113 [job3] 00:12:58.113 filename=/dev/nvme0n4 00:12:58.113 Could not set queue depth (nvme0n1) 00:12:58.113 Could not set queue depth (nvme0n2) 00:12:58.113 Could not set queue depth (nvme0n3) 00:12:58.113 Could not set queue depth (nvme0n4) 00:12:58.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:58.113 fio-3.35 00:12:58.113 Starting 4 threads 00:12:59.488 00:12:59.488 job0: (groupid=0, jobs=1): err= 0: pid=68918: Wed Jul 24 23:13:21 2024 00:12:59.488 read: IOPS=2885, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:12:59.488 slat (usec): min=10, max=135, avg=14.08, stdev= 5.07 00:12:59.488 clat (usec): min=82, max=7990, avg=171.83, stdev=146.62 00:12:59.488 lat (usec): min=148, max=8002, avg=185.91, stdev=146.63 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:12:59.488 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:12:59.488 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 198], 
00:12:59.488 | 99.00th=[ 219], 99.50th=[ 241], 99.90th=[ 400], 99.95th=[ 478], 00:12:59.488 | 99.99th=[ 7963] 00:12:59.488 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:12:59.488 slat (usec): min=13, max=137, avg=20.37, stdev= 4.81 00:12:59.488 clat (usec): min=93, max=249, avg=126.80, stdev=13.30 00:12:59.488 lat (usec): min=112, max=387, avg=147.17, stdev=14.07 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:12:59.488 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:12:59.488 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 151], 00:12:59.488 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 194], 99.95th=[ 208], 00:12:59.488 | 99.99th=[ 249] 00:12:59.488 bw ( KiB/s): min=12288, max=12288, per=31.95%, avg=12288.00, stdev= 0.00, samples=1 00:12:59.488 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:59.488 lat (usec) : 100=0.27%, 250=99.60%, 500=0.12% 00:12:59.488 lat (msec) : 10=0.02% 00:12:59.488 cpu : usr=2.30%, sys=8.40%, ctx=5973, majf=0, minf=11 00:12:59.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 issued rwts: total=2888,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.488 job1: (groupid=0, jobs=1): err= 0: pid=68919: Wed Jul 24 23:13:21 2024 00:12:59.488 read: IOPS=1563, BW=6254KiB/s (6404kB/s)(6260KiB/1001msec) 00:12:59.488 slat (nsec): min=12380, max=76283, avg=17605.46, stdev=5718.89 00:12:59.488 clat (usec): min=150, max=1513, avg=302.42, stdev=84.40 00:12:59.488 lat (usec): min=167, max=1533, avg=320.03, stdev=87.62 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 163], 5.00th=[ 235], 10.00th=[ 258], 20.00th=[ 269], 00:12:59.488 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:12:59.488 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 375], 95.00th=[ 404], 00:12:59.488 | 99.00th=[ 734], 99.50th=[ 766], 99.90th=[ 824], 99.95th=[ 1516], 00:12:59.488 | 99.99th=[ 1516] 00:12:59.488 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:59.488 slat (usec): min=18, max=144, avg=26.29, stdev= 8.71 00:12:59.488 clat (usec): min=100, max=740, avg=213.50, stdev=90.28 00:12:59.488 lat (usec): min=123, max=780, avg=239.79, stdev=96.59 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 127], 20.00th=[ 139], 00:12:59.488 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:12:59.488 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 388], 95.00th=[ 420], 00:12:59.488 | 99.00th=[ 474], 99.50th=[ 627], 99.90th=[ 668], 99.95th=[ 676], 00:12:59.488 | 99.99th=[ 742] 00:12:59.488 bw ( KiB/s): min= 8192, max= 8192, per=21.30%, avg=8192.00, stdev= 0.00, samples=1 00:12:59.488 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:59.488 lat (usec) : 250=51.23%, 500=47.25%, 750=1.27%, 1000=0.22% 00:12:59.488 lat (msec) : 2=0.03% 00:12:59.488 cpu : usr=1.80%, sys=6.40%, ctx=3616, majf=0, minf=7 00:12:59.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:59.488 issued rwts: total=1565,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.488 job2: (groupid=0, jobs=1): err= 0: pid=68920: Wed Jul 24 23:13:21 2024 00:12:59.488 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:59.488 slat (nsec): min=12850, max=44585, avg=16896.22, stdev=4963.45 00:12:59.488 clat (usec): min=161, max=1597, avg=195.28, stdev=37.53 00:12:59.488 lat (usec): min=174, max=1620, avg=212.18, stdev=38.97 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:12:59.488 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:12:59.488 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 229], 00:12:59.488 | 99.00th=[ 281], 99.50th=[ 314], 99.90th=[ 660], 99.95th=[ 676], 00:12:59.488 | 99.99th=[ 1598] 00:12:59.488 write: IOPS=2702, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:12:59.488 slat (usec): min=15, max=183, avg=22.39, stdev= 5.88 00:12:59.488 clat (usec): min=109, max=1722, avg=142.89, stdev=36.75 00:12:59.488 lat (usec): min=130, max=1741, avg=165.28, stdev=37.77 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 116], 5.00th=[ 123], 10.00th=[ 127], 20.00th=[ 131], 00:12:59.488 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:12:59.488 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 167], 00:12:59.488 | 99.00th=[ 196], 99.50th=[ 219], 99.90th=[ 449], 99.95th=[ 701], 00:12:59.488 | 99.99th=[ 1729] 00:12:59.488 bw ( KiB/s): min=12288, max=12288, per=31.95%, avg=12288.00, stdev= 0.00, samples=1 00:12:59.488 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:59.488 lat (usec) : 250=98.84%, 500=1.06%, 750=0.06% 00:12:59.488 lat (msec) : 2=0.04% 00:12:59.488 cpu : usr=1.50%, sys=8.80%, ctx=5265, majf=0, minf=17 00:12:59.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 issued rwts: total=2560,2705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.488 job3: (groupid=0, jobs=1): err= 0: pid=68921: Wed Jul 24 23:13:21 2024 00:12:59.488 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:59.488 slat (nsec): min=12243, max=66568, avg=17240.30, stdev=6479.76 00:12:59.488 clat (usec): min=189, max=7962, avg=325.54, stdev=238.21 00:12:59.488 lat (usec): min=205, max=7977, avg=342.78, stdev=239.66 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 233], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 273], 00:12:59.488 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:12:59.488 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 461], 95.00th=[ 494], 00:12:59.488 | 99.00th=[ 742], 99.50th=[ 930], 99.90th=[ 3654], 99.95th=[ 7963], 00:12:59.488 | 99.99th=[ 7963] 00:12:59.488 write: IOPS=1798, BW=7193KiB/s (7365kB/s)(7200KiB/1001msec); 0 zone resets 00:12:59.488 slat (usec): min=17, max=155, avg=25.80, stdev=10.28 00:12:59.488 clat (usec): min=107, max=6621, avg=233.59, stdev=202.90 00:12:59.488 lat (usec): min=128, max=6650, avg=259.39, stdev=206.33 00:12:59.488 clat percentiles (usec): 00:12:59.488 | 1.00th=[ 119], 5.00th=[ 131], 10.00th=[ 143], 20.00th=[ 186], 00:12:59.488 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 
212], 00:12:59.488 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 400], 95.00th=[ 429], 00:12:59.488 | 99.00th=[ 465], 99.50th=[ 486], 99.90th=[ 3490], 99.95th=[ 6652], 00:12:59.488 | 99.99th=[ 6652] 00:12:59.488 bw ( KiB/s): min= 6200, max= 8192, per=18.71%, avg=7196.00, stdev=1408.56, samples=2 00:12:59.488 iops : min= 1550, max= 2048, avg=1799.00, stdev=352.14, samples=2 00:12:59.488 lat (usec) : 250=46.40%, 500=51.62%, 750=1.41%, 1000=0.33% 00:12:59.488 lat (msec) : 2=0.06%, 4=0.12%, 10=0.06% 00:12:59.488 cpu : usr=1.50%, sys=5.90%, ctx=3336, majf=0, minf=10 00:12:59.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.488 issued rwts: total=1536,1800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.488 00:12:59.488 Run status group 0 (all jobs): 00:12:59.488 READ: bw=33.4MiB/s (35.0MB/s), 6138KiB/s-11.3MiB/s (6285kB/s-11.8MB/s), io=33.4MiB (35.0MB), run=1001-1001msec 00:12:59.489 WRITE: bw=37.6MiB/s (39.4MB/s), 7193KiB/s-12.0MiB/s (7365kB/s-12.6MB/s), io=37.6MiB (39.4MB), run=1001-1001msec 00:12:59.489 00:12:59.489 Disk stats (read/write): 00:12:59.489 nvme0n1: ios=2610/2590, merge=0/0, ticks=488/347, in_queue=835, util=88.58% 00:12:59.489 nvme0n2: ios=1571/1746, merge=0/0, ticks=477/354, in_queue=831, util=88.06% 00:12:59.489 nvme0n3: ios=2099/2560, merge=0/0, ticks=399/393, in_queue=792, util=88.93% 00:12:59.489 nvme0n4: ios=1500/1536, merge=0/0, ticks=479/325, in_queue=804, util=88.43% 00:12:59.489 23:13:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:59.489 [global] 00:12:59.489 thread=1 00:12:59.489 invalidate=1 00:12:59.489 rw=write 00:12:59.489 time_based=1 00:12:59.489 runtime=1 00:12:59.489 ioengine=libaio 00:12:59.489 direct=1 00:12:59.489 bs=4096 00:12:59.489 iodepth=128 00:12:59.489 norandommap=0 00:12:59.489 numjobs=1 00:12:59.489 00:12:59.489 verify_dump=1 00:12:59.489 verify_backlog=512 00:12:59.489 verify_state_save=0 00:12:59.489 do_verify=1 00:12:59.489 verify=crc32c-intel 00:12:59.489 [job0] 00:12:59.489 filename=/dev/nvme0n1 00:12:59.489 [job1] 00:12:59.489 filename=/dev/nvme0n2 00:12:59.489 [job2] 00:12:59.489 filename=/dev/nvme0n3 00:12:59.489 [job3] 00:12:59.489 filename=/dev/nvme0n4 00:12:59.489 Could not set queue depth (nvme0n1) 00:12:59.489 Could not set queue depth (nvme0n2) 00:12:59.489 Could not set queue depth (nvme0n3) 00:12:59.489 Could not set queue depth (nvme0n4) 00:12:59.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:59.489 fio-3.35 00:12:59.489 Starting 4 threads 00:13:00.864 00:13:00.864 job0: (groupid=0, jobs=1): err= 0: pid=68975: Wed Jul 24 23:13:22 2024 00:13:00.864 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:13:00.864 slat (usec): min=7, max=10430, avg=311.54, stdev=1645.63 00:13:00.864 clat (usec): min=29481, max=43045, avg=40515.60, 
stdev=1837.98 00:13:00.864 lat (usec): min=39269, max=43072, avg=40827.14, stdev=812.96 00:13:00.864 clat percentiles (usec): 00:13:00.864 | 1.00th=[31065], 5.00th=[39584], 10.00th=[39584], 20.00th=[40109], 00:13:00.864 | 30.00th=[40109], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:13:00.864 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42730], 00:13:00.864 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:13:00.864 | 99.99th=[43254] 00:13:00.864 write: IOPS=1649, BW=6596KiB/s (6755kB/s)(6616KiB/1003msec); 0 zone resets 00:13:00.864 slat (usec): min=16, max=10373, avg=306.49, stdev=1574.85 00:13:00.864 clat (usec): min=2038, max=42096, avg=38218.29, stdev=6521.20 00:13:00.864 lat (usec): min=2063, max=42143, avg=38524.77, stdev=6354.59 00:13:00.864 clat percentiles (usec): 00:13:00.864 | 1.00th=[ 2573], 5.00th=[22152], 10.00th=[32900], 20.00th=[38536], 00:13:00.864 | 30.00th=[39584], 40.00th=[39584], 50.00th=[40109], 60.00th=[40109], 00:13:00.864 | 70.00th=[40109], 80.00th=[40633], 90.00th=[41157], 95.00th=[41681], 00:13:00.864 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:00.864 | 99.99th=[42206] 00:13:00.864 bw ( KiB/s): min= 4600, max= 7688, per=12.87%, avg=6144.00, stdev=2183.55, samples=2 00:13:00.864 iops : min= 1150, max= 1922, avg=1536.00, stdev=545.89, samples=2 00:13:00.864 lat (msec) : 4=0.69%, 20=1.00%, 50=98.31% 00:13:00.864 cpu : usr=2.79%, sys=3.69%, ctx=100, majf=0, minf=9 00:13:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.864 issued rwts: total=1536,1654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.864 job1: (groupid=0, jobs=1): err= 0: pid=68976: Wed Jul 24 23:13:22 2024 00:13:00.864 read: IOPS=4425, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1003msec) 00:13:00.864 slat (usec): min=10, max=5860, avg=109.18, stdev=479.89 00:13:00.864 clat (usec): min=2479, max=20349, avg=14346.79, stdev=1571.60 00:13:00.864 lat (usec): min=4272, max=20382, avg=14455.98, stdev=1575.52 00:13:00.864 clat percentiles (usec): 00:13:00.864 | 1.00th=[ 9110], 5.00th=[11863], 10.00th=[12911], 20.00th=[13566], 00:13:00.864 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:13:00.864 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16581], 00:13:00.864 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19530], 99.95th=[19792], 00:13:00.864 | 99.99th=[20317] 00:13:00.864 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:13:00.864 slat (usec): min=9, max=6456, avg=103.61, stdev=585.82 00:13:00.865 clat (usec): min=6487, max=20968, avg=13680.74, stdev=1499.26 00:13:00.865 lat (usec): min=6512, max=20988, avg=13784.35, stdev=1591.09 00:13:00.865 clat percentiles (usec): 00:13:00.865 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[12256], 20.00th=[12780], 00:13:00.865 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13960], 00:13:00.865 | 70.00th=[14091], 80.00th=[14615], 90.00th=[15533], 95.00th=[15926], 00:13:00.865 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[20841], 00:13:00.865 | 99.99th=[20841] 00:13:00.865 bw ( KiB/s): min=17704, max=19160, per=38.62%, avg=18432.00, stdev=1029.55, samples=2 00:13:00.865 iops : min= 4426, max= 4790, avg=4608.00, stdev=257.39, samples=2 
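Each of these passes is driven by scripts/fio-wrapper, whose -i/-d/-t/-r flags correspond to the bs, iodepth, rw and runtime values in the job files it generates, as the logged [global] sections show. A rough stand-alone sketch of the first write/verify pass, assuming the namespaces are still connected as /dev/nvme0n1 through /dev/nvme0n4, would be:

# stand-alone reproduction of the generated write/verify job file
# (options copied from the job file logged above; paths as connected earlier)
fio --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0 \
    --name=job0 --filename=/dev/nvme0n1 \
    --name=job1 --filename=/dev/nvme0n2 \
    --name=job2 --filename=/dev/nvme0n3 \
    --name=job3 --filename=/dev/nvme0n4

The later passes change only --rw, --iodepth and --runtime, matching the -t, -d and -r arguments logged for each fio-wrapper invocation.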
00:13:00.865 lat (msec) : 4=0.01%, 10=1.56%, 20=98.24%, 50=0.19% 00:13:00.865 cpu : usr=3.89%, sys=13.37%, ctx=345, majf=0, minf=13 00:13:00.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:00.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.865 issued rwts: total=4439,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.865 job2: (groupid=0, jobs=1): err= 0: pid=68977: Wed Jul 24 23:13:22 2024 00:13:00.865 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:13:00.865 slat (usec): min=6, max=10334, avg=311.83, stdev=1643.38 00:13:00.865 clat (usec): min=29661, max=43122, avg=40475.00, stdev=1828.73 00:13:00.865 lat (usec): min=39389, max=43136, avg=40786.83, stdev=794.97 00:13:00.865 clat percentiles (usec): 00:13:00.865 | 1.00th=[31065], 5.00th=[39584], 10.00th=[39584], 20.00th=[40109], 00:13:00.865 | 30.00th=[40109], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:13:00.865 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:00.865 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:00.865 | 99.99th=[43254] 00:13:00.865 write: IOPS=1624, BW=6500KiB/s (6655kB/s)(6532KiB/1005msec); 0 zone resets 00:13:00.865 slat (usec): min=13, max=10095, avg=310.31, stdev=1577.28 00:13:00.865 clat (usec): min=2554, max=41974, avg=38807.84, stdev=4981.43 00:13:00.865 lat (usec): min=11623, max=42517, avg=39118.14, stdev=4726.93 00:13:00.865 clat percentiles (usec): 00:13:00.865 | 1.00th=[12125], 5.00th=[30802], 10.00th=[38536], 20.00th=[39060], 00:13:00.865 | 30.00th=[39584], 40.00th=[40109], 50.00th=[40109], 60.00th=[40109], 00:13:00.865 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41157], 95.00th=[41681], 00:13:00.865 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:00.865 | 99.99th=[42206] 00:13:00.865 bw ( KiB/s): min= 4600, max= 7688, per=12.87%, avg=6144.00, stdev=2183.55, samples=2 00:13:00.865 iops : min= 1150, max= 1922, avg=1536.00, stdev=545.89, samples=2 00:13:00.865 lat (msec) : 4=0.03%, 20=1.01%, 50=98.96% 00:13:00.865 cpu : usr=2.19%, sys=5.48%, ctx=100, majf=0, minf=11 00:13:00.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:13:00.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.865 issued rwts: total=1536,1633,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.865 job3: (groupid=0, jobs=1): err= 0: pid=68978: Wed Jul 24 23:13:22 2024 00:13:00.865 read: IOPS=3609, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1002msec) 00:13:00.865 slat (usec): min=11, max=3688, avg=126.57, stdev=485.36 00:13:00.865 clat (usec): min=819, max=19754, avg=16676.42, stdev=1531.40 00:13:00.865 lat (usec): min=3228, max=19770, avg=16802.99, stdev=1456.70 00:13:00.865 clat percentiles (usec): 00:13:00.865 | 1.00th=[12911], 5.00th=[14484], 10.00th=[15926], 20.00th=[16319], 00:13:00.865 | 30.00th=[16450], 40.00th=[16712], 50.00th=[16909], 60.00th=[16909], 00:13:00.865 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[18220], 00:13:00.865 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:13:00.865 | 99.99th=[19792] 00:13:00.865 write: IOPS=4087, BW=16.0MiB/s 
(16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:13:00.865 slat (usec): min=14, max=3947, avg=123.88, stdev=543.07 00:13:00.865 clat (usec): min=7015, max=19815, avg=16129.08, stdev=1266.56 00:13:00.865 lat (usec): min=7042, max=19841, avg=16252.96, stdev=1182.17 00:13:00.865 clat percentiles (usec): 00:13:00.865 | 1.00th=[11338], 5.00th=[14615], 10.00th=[15139], 20.00th=[15664], 00:13:00.865 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:13:00.865 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:13:00.865 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19792], 99.95th=[19792], 00:13:00.865 | 99.99th=[19792] 00:13:00.865 bw ( KiB/s): min=15624, max=16384, per=33.53%, avg=16004.00, stdev=537.40, samples=2 00:13:00.865 iops : min= 3906, max= 4096, avg=4001.00, stdev=134.35, samples=2 00:13:00.865 lat (usec) : 1000=0.01% 00:13:00.865 lat (msec) : 4=0.34%, 10=0.41%, 20=99.24% 00:13:00.865 cpu : usr=4.00%, sys=12.79%, ctx=359, majf=0, minf=17 00:13:00.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:00.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:00.865 issued rwts: total=3617,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:00.865 00:13:00.865 Run status group 0 (all jobs): 00:13:00.865 READ: bw=43.3MiB/s (45.4MB/s), 6113KiB/s-17.3MiB/s (6260kB/s-18.1MB/s), io=43.5MiB (45.6MB), run=1002-1005msec 00:13:00.865 WRITE: bw=46.6MiB/s (48.9MB/s), 6500KiB/s-17.9MiB/s (6655kB/s-18.8MB/s), io=46.8MiB (49.1MB), run=1002-1005msec 00:13:00.865 00:13:00.865 Disk stats (read/write): 00:13:00.865 nvme0n1: ios=1234/1536, merge=0/0, ticks=10897/12876, in_queue=23773, util=88.28% 00:13:00.865 nvme0n2: ios=3750/4096, merge=0/0, ticks=26097/23833, in_queue=49930, util=89.30% 00:13:00.865 nvme0n3: ios=1201/1536, merge=0/0, ticks=11748/14398, in_queue=26146, util=89.25% 00:13:00.865 nvme0n4: ios=3104/3584, merge=0/0, ticks=12112/12571, in_queue=24683, util=89.60% 00:13:00.865 23:13:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:00.865 [global] 00:13:00.865 thread=1 00:13:00.865 invalidate=1 00:13:00.865 rw=randwrite 00:13:00.865 time_based=1 00:13:00.865 runtime=1 00:13:00.865 ioengine=libaio 00:13:00.865 direct=1 00:13:00.865 bs=4096 00:13:00.865 iodepth=128 00:13:00.865 norandommap=0 00:13:00.865 numjobs=1 00:13:00.865 00:13:00.865 verify_dump=1 00:13:00.865 verify_backlog=512 00:13:00.865 verify_state_save=0 00:13:00.865 do_verify=1 00:13:00.865 verify=crc32c-intel 00:13:00.865 [job0] 00:13:00.865 filename=/dev/nvme0n1 00:13:00.865 [job1] 00:13:00.865 filename=/dev/nvme0n2 00:13:00.865 [job2] 00:13:00.865 filename=/dev/nvme0n3 00:13:00.865 [job3] 00:13:00.865 filename=/dev/nvme0n4 00:13:00.865 Could not set queue depth (nvme0n1) 00:13:00.865 Could not set queue depth (nvme0n2) 00:13:00.865 Could not set queue depth (nvme0n3) 00:13:00.865 Could not set queue depth (nvme0n4) 00:13:00.865 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.865 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.865 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:13:00.865 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:00.865 fio-3.35 00:13:00.865 Starting 4 threads 00:13:02.240 00:13:02.240 job0: (groupid=0, jobs=1): err= 0: pid=69036: Wed Jul 24 23:13:24 2024 00:13:02.240 read: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1006msec) 00:13:02.240 slat (usec): min=8, max=17425, avg=120.37, stdev=802.34 00:13:02.240 clat (usec): min=4255, max=37163, avg=16693.02, stdev=3202.06 00:13:02.240 lat (usec): min=8752, max=37188, avg=16813.39, stdev=3229.12 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[ 9372], 5.00th=[11338], 10.00th=[14615], 20.00th=[15270], 00:13:02.240 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16319], 60.00th=[16581], 00:13:02.240 | 70.00th=[16909], 80.00th=[17433], 90.00th=[19530], 95.00th=[24511], 00:13:02.240 | 99.00th=[29230], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:13:02.240 | 99.99th=[36963] 00:13:02.240 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:13:02.240 slat (usec): min=11, max=18466, avg=120.90, stdev=777.54 00:13:02.240 clat (usec): min=6813, max=33003, avg=15179.27, stdev=3084.75 00:13:02.240 lat (usec): min=9995, max=33041, avg=15300.17, stdev=3021.28 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[ 9765], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:13:02.240 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14746], 60.00th=[15139], 00:13:02.240 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16581], 95.00th=[19792], 00:13:02.240 | 99.00th=[32113], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:13:02.240 | 99.99th=[32900] 00:13:02.240 bw ( KiB/s): min=16384, max=16384, per=27.17%, avg=16384.00, stdev= 0.00, samples=2 00:13:02.240 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:02.240 lat (msec) : 10=1.79%, 20=91.68%, 50=6.53% 00:13:02.240 cpu : usr=4.68%, sys=11.24%, ctx=174, majf=0, minf=4 00:13:02.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.240 issued rwts: total=3911,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.240 job1: (groupid=0, jobs=1): err= 0: pid=69037: Wed Jul 24 23:13:24 2024 00:13:02.240 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:13:02.240 slat (usec): min=6, max=8214, avg=110.34, stdev=708.97 00:13:02.240 clat (usec): min=7226, max=24403, avg=15331.37, stdev=2159.02 00:13:02.240 lat (usec): min=7243, max=29755, avg=15441.71, stdev=2191.43 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[ 9503], 5.00th=[11207], 10.00th=[11863], 20.00th=[14615], 00:13:02.240 | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16057], 00:13:02.240 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:13:02.240 | 99.00th=[23725], 99.50th=[23987], 99.90th=[24249], 99.95th=[24511], 00:13:02.240 | 99.99th=[24511] 00:13:02.240 write: IOPS=4597, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:02.240 slat (usec): min=5, max=11584, avg=111.15, stdev=679.73 00:13:02.240 clat (usec): min=547, max=21292, avg=13913.95, stdev=2255.13 00:13:02.240 lat (usec): min=4873, max=21310, avg=14025.09, stdev=2180.42 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[ 5866], 5.00th=[10028], 10.00th=[10945], 
20.00th=[12649], 00:13:02.240 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[14615], 00:13:02.240 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:13:02.240 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21365], 99.95th=[21365], 00:13:02.240 | 99.99th=[21365] 00:13:02.240 bw ( KiB/s): min=16440, max=16440, per=27.27%, avg=16440.00, stdev= 0.00, samples=1 00:13:02.240 iops : min= 4110, max= 4110, avg=4110.00, stdev= 0.00, samples=1 00:13:02.240 lat (usec) : 750=0.01% 00:13:02.240 lat (msec) : 10=3.37%, 20=95.04%, 50=1.59% 00:13:02.240 cpu : usr=4.60%, sys=12.09%, ctx=186, majf=0, minf=1 00:13:02.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:02.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.240 issued rwts: total=4096,4607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.240 job2: (groupid=0, jobs=1): err= 0: pid=69044: Wed Jul 24 23:13:24 2024 00:13:02.240 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:02.240 slat (usec): min=6, max=9497, avg=125.70, stdev=827.35 00:13:02.240 clat (usec): min=8071, max=30062, avg=17555.07, stdev=2632.56 00:13:02.240 lat (usec): min=8086, max=35346, avg=17680.77, stdev=2666.59 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[10552], 5.00th=[12780], 10.00th=[13173], 20.00th=[16319], 00:13:02.240 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[18482], 00:13:02.240 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19530], 95.00th=[19792], 00:13:02.240 | 99.00th=[27395], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:13:02.240 | 99.99th=[30016] 00:13:02.240 write: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1002msec); 0 zone resets 00:13:02.240 slat (usec): min=10, max=13798, avg=132.58, stdev=841.06 00:13:02.240 clat (usec): min=405, max=23939, avg=16404.93, stdev=2766.63 00:13:02.240 lat (usec): min=5812, max=24203, avg=16537.51, stdev=2676.27 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[ 6587], 5.00th=[11338], 10.00th=[13173], 20.00th=[15008], 00:13:02.240 | 30.00th=[15795], 40.00th=[16450], 50.00th=[17171], 60.00th=[17433], 00:13:02.240 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18744], 95.00th=[19530], 00:13:02.240 | 99.00th=[22676], 99.50th=[22676], 99.90th=[23987], 99.95th=[23987], 00:13:02.240 | 99.99th=[23987] 00:13:02.240 bw ( KiB/s): min=13808, max=16376, per=25.03%, avg=15092.00, stdev=1815.85, samples=2 00:13:02.240 iops : min= 3452, max= 4094, avg=3773.00, stdev=453.96, samples=2 00:13:02.240 lat (usec) : 500=0.01% 00:13:02.240 lat (msec) : 10=2.26%, 20=93.68%, 50=4.05% 00:13:02.240 cpu : usr=3.80%, sys=11.19%, ctx=167, majf=0, minf=7 00:13:02.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:02.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.240 issued rwts: total=3584,3901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.240 job3: (groupid=0, jobs=1): err= 0: pid=69045: Wed Jul 24 23:13:24 2024 00:13:02.240 read: IOPS=2407, BW=9630KiB/s (9861kB/s)(9688KiB/1006msec) 00:13:02.240 slat (usec): min=5, max=27416, avg=204.83, stdev=1509.43 00:13:02.240 clat (usec): min=4279, max=54165, avg=26594.59, 
stdev=5680.84 00:13:02.240 lat (usec): min=8361, max=54178, avg=26799.42, stdev=5731.24 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[15533], 5.00th=[17695], 10.00th=[22414], 20.00th=[25035], 00:13:02.240 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:13:02.240 | 70.00th=[26870], 80.00th=[27395], 90.00th=[28705], 95.00th=[36439], 00:13:02.240 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:13:02.240 | 99.99th=[54264] 00:13:02.240 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:13:02.240 slat (usec): min=6, max=22883, avg=189.11, stdev=1389.29 00:13:02.240 clat (usec): min=4001, max=52056, avg=24597.45, stdev=3554.37 00:13:02.240 lat (usec): min=4042, max=52079, avg=24786.57, stdev=3409.36 00:13:02.240 clat percentiles (usec): 00:13:02.240 | 1.00th=[14484], 5.00th=[21627], 10.00th=[21890], 20.00th=[22938], 00:13:02.240 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:13:02.240 | 70.00th=[25297], 80.00th=[25822], 90.00th=[26346], 95.00th=[32637], 00:13:02.240 | 99.00th=[34866], 99.50th=[34866], 99.90th=[52167], 99.95th=[52167], 00:13:02.240 | 99.99th=[52167] 00:13:02.240 bw ( KiB/s): min= 9224, max=11278, per=17.00%, avg=10251.00, stdev=1452.40, samples=2 00:13:02.240 iops : min= 2306, max= 2819, avg=2562.50, stdev=362.75, samples=2 00:13:02.240 lat (msec) : 10=0.16%, 20=5.68%, 50=92.87%, 100=1.28% 00:13:02.240 cpu : usr=1.99%, sys=7.86%, ctx=111, majf=0, minf=9 00:13:02.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:02.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:02.240 issued rwts: total=2422,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.240 00:13:02.240 Run status group 0 (all jobs): 00:13:02.241 READ: bw=54.4MiB/s (57.1MB/s), 9630KiB/s-16.0MiB/s (9861kB/s-16.7MB/s), io=54.7MiB (57.4MB), run=1002-1006msec 00:13:02.241 WRITE: bw=58.9MiB/s (61.7MB/s), 9.94MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=59.2MiB (62.1MB), run=1002-1006msec 00:13:02.241 00:13:02.241 Disk stats (read/write): 00:13:02.241 nvme0n1: ios=3322/3584, merge=0/0, ticks=50392/48956, in_queue=99348, util=86.77% 00:13:02.241 nvme0n2: ios=3424/3584, merge=0/0, ticks=51245/48236, in_queue=99481, util=86.27% 00:13:02.241 nvme0n3: ios=2947/3072, merge=0/0, ticks=51086/48753, in_queue=99839, util=88.56% 00:13:02.241 nvme0n4: ios=2048/2172, merge=0/0, ticks=51322/49981, in_queue=101303, util=89.19% 00:13:02.241 23:13:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:02.241 23:13:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69058 00:13:02.241 23:13:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:02.241 23:13:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:02.241 [global] 00:13:02.241 thread=1 00:13:02.241 invalidate=1 00:13:02.241 rw=read 00:13:02.241 time_based=1 00:13:02.241 runtime=10 00:13:02.241 ioengine=libaio 00:13:02.241 direct=1 00:13:02.241 bs=4096 00:13:02.241 iodepth=1 00:13:02.241 norandommap=1 00:13:02.241 numjobs=1 00:13:02.241 00:13:02.241 [job0] 00:13:02.241 filename=/dev/nvme0n1 00:13:02.241 [job1] 00:13:02.241 filename=/dev/nvme0n2 00:13:02.241 [job2] 00:13:02.241 filename=/dev/nvme0n3 00:13:02.241 [job3] 00:13:02.241 
filename=/dev/nvme0n4 00:13:02.241 Could not set queue depth (nvme0n1) 00:13:02.241 Could not set queue depth (nvme0n2) 00:13:02.241 Could not set queue depth (nvme0n3) 00:13:02.241 Could not set queue depth (nvme0n4) 00:13:02.241 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.241 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.241 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.241 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:02.241 fio-3.35 00:13:02.241 Starting 4 threads 00:13:05.629 23:13:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:05.629 fio: pid=69101, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:05.629 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=55894016, buflen=4096 00:13:05.629 23:13:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:05.629 fio: pid=69100, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:05.629 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=57962496, buflen=4096 00:13:05.629 23:13:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.629 23:13:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:05.887 fio: pid=69098, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:05.887 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=45027328, buflen=4096 00:13:05.887 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:05.887 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:06.146 fio: pid=69099, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:06.146 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=51175424, buflen=4096 00:13:06.146 00:13:06.146 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69098: Wed Jul 24 23:13:28 2024 00:13:06.146 read: IOPS=3132, BW=12.2MiB/s (12.8MB/s)(42.9MiB/3510msec) 00:13:06.146 slat (usec): min=8, max=14073, avg=19.65, stdev=191.02 00:13:06.146 clat (usec): min=133, max=2364, avg=297.86, stdev=95.48 00:13:06.146 lat (usec): min=146, max=14497, avg=317.51, stdev=215.09 00:13:06.146 clat percentiles (usec): 00:13:06.146 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 221], 00:13:06.146 | 30.00th=[ 237], 40.00th=[ 253], 50.00th=[ 285], 60.00th=[ 347], 00:13:06.146 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 424], 00:13:06.146 | 99.00th=[ 469], 99.50th=[ 490], 99.90th=[ 594], 99.95th=[ 963], 00:13:06.146 | 99.99th=[ 1532] 00:13:06.146 bw ( KiB/s): min= 9536, max=17648, per=23.36%, avg=12656.00, stdev=3898.68, samples=6 00:13:06.146 iops : min= 2384, max= 4412, avg=3164.00, stdev=974.67, samples=6 00:13:06.146 lat (usec) : 250=38.28%, 500=61.33%, 750=0.31%, 1000=0.03% 00:13:06.146 lat (msec) : 2=0.04%, 4=0.01% 00:13:06.146 cpu : usr=1.20%, sys=4.70%, ctx=11004, 
majf=0, minf=1 00:13:06.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 issued rwts: total=10994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.146 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69099: Wed Jul 24 23:13:28 2024 00:13:06.146 read: IOPS=3300, BW=12.9MiB/s (13.5MB/s)(48.8MiB/3786msec) 00:13:06.146 slat (usec): min=8, max=14721, avg=21.29, stdev=203.16 00:13:06.146 clat (usec): min=137, max=8061, avg=279.88, stdev=128.63 00:13:06.146 lat (usec): min=150, max=14991, avg=301.18, stdev=242.04 00:13:06.146 clat percentiles (usec): 00:13:06.146 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 178], 00:13:06.146 | 30.00th=[ 204], 40.00th=[ 227], 50.00th=[ 249], 60.00th=[ 322], 00:13:06.146 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 404], 95.00th=[ 420], 00:13:06.146 | 99.00th=[ 461], 99.50th=[ 490], 99.90th=[ 881], 99.95th=[ 1450], 00:13:06.146 | 99.99th=[ 3818] 00:13:06.146 bw ( KiB/s): min= 9688, max=20880, per=23.95%, avg=12976.86, stdev=4266.33, samples=7 00:13:06.146 iops : min= 2422, max= 5220, avg=3244.14, stdev=1066.59, samples=7 00:13:06.146 lat (usec) : 250=50.42%, 500=49.15%, 750=0.31%, 1000=0.03% 00:13:06.146 lat (msec) : 2=0.03%, 4=0.04%, 10=0.01% 00:13:06.146 cpu : usr=1.27%, sys=5.47%, ctx=12504, majf=0, minf=1 00:13:06.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 issued rwts: total=12495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.146 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69100: Wed Jul 24 23:13:28 2024 00:13:06.146 read: IOPS=4361, BW=17.0MiB/s (17.9MB/s)(55.3MiB/3245msec) 00:13:06.146 slat (usec): min=11, max=8837, avg=16.89, stdev=97.47 00:13:06.146 clat (usec): min=150, max=4490, avg=210.81, stdev=65.23 00:13:06.146 lat (usec): min=164, max=9066, avg=227.70, stdev=117.99 00:13:06.146 clat percentiles (usec): 00:13:06.146 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:13:06.146 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 210], 00:13:06.146 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 265], 95.00th=[ 281], 00:13:06.146 | 99.00th=[ 326], 99.50th=[ 392], 99.90th=[ 652], 99.95th=[ 1037], 00:13:06.146 | 99.99th=[ 2933] 00:13:06.146 bw ( KiB/s): min=15416, max=20288, per=32.52%, avg=17618.67, stdev=2000.33, samples=6 00:13:06.146 iops : min= 3854, max= 5072, avg=4404.67, stdev=500.08, samples=6 00:13:06.146 lat (usec) : 250=85.35%, 500=14.38%, 750=0.17%, 1000=0.03% 00:13:06.146 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:13:06.146 cpu : usr=1.29%, sys=5.98%, ctx=14156, majf=0, minf=1 00:13:06.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.146 issued rwts: total=14152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.146 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:13:06.146 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69101: Wed Jul 24 23:13:28 2024 00:13:06.146 read: IOPS=4616, BW=18.0MiB/s (18.9MB/s)(53.3MiB/2956msec) 00:13:06.146 slat (usec): min=11, max=106, avg=13.67, stdev= 2.52 00:13:06.146 clat (usec): min=146, max=2186, avg=201.42, stdev=48.19 00:13:06.146 lat (usec): min=159, max=2200, avg=215.09, stdev=48.36 00:13:06.146 clat percentiles (usec): 00:13:06.146 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:13:06.146 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:13:06.146 | 70.00th=[ 210], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 273], 00:13:06.146 | 99.00th=[ 310], 99.50th=[ 322], 99.90th=[ 383], 99.95th=[ 766], 00:13:06.146 | 99.99th=[ 2180] 00:13:06.146 bw ( KiB/s): min=17024, max=21528, per=34.42%, avg=18648.00, stdev=1930.20, samples=5 00:13:06.146 iops : min= 4256, max= 5382, avg=4662.00, stdev=482.55, samples=5 00:13:06.147 lat (usec) : 250=88.14%, 500=11.77%, 750=0.03%, 1000=0.02% 00:13:06.147 lat (msec) : 2=0.01%, 4=0.01% 00:13:06.147 cpu : usr=1.35%, sys=5.72%, ctx=13648, majf=0, minf=1 00:13:06.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:06.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:06.147 issued rwts: total=13647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:06.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:06.147 00:13:06.147 Run status group 0 (all jobs): 00:13:06.147 READ: bw=52.9MiB/s (55.5MB/s), 12.2MiB/s-18.0MiB/s (12.8MB/s-18.9MB/s), io=200MiB (210MB), run=2956-3786msec 00:13:06.147 00:13:06.147 Disk stats (read/write): 00:13:06.147 nvme0n1: ios=10533/0, merge=0/0, ticks=3013/0, in_queue=3013, util=95.28% 00:13:06.147 nvme0n2: ios=11696/0, merge=0/0, ticks=3269/0, in_queue=3269, util=95.26% 00:13:06.147 nvme0n3: ios=13616/0, merge=0/0, ticks=2901/0, in_queue=2901, util=96.40% 00:13:06.147 nvme0n4: ios=13264/0, merge=0/0, ticks=2679/0, in_queue=2679, util=96.76% 00:13:06.147 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.147 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:06.406 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.406 23:13:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:06.664 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:06.664 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:07.230 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.230 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:07.488 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:07.488 23:13:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 69058 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.747 nvmf hotplug test: fio failed as expected 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:07.747 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.005 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.006 rmmod nvme_tcp 00:13:08.006 rmmod nvme_fabrics 00:13:08.006 rmmod nvme_keyring 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68672 ']' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68672 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 68672 ']' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68672 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@953 -- # uname 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68672 00:13:08.006 killing process with pid 68672 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68672' 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68672 00:13:08.006 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68672 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:08.573 00:13:08.573 real 0m20.030s 00:13:08.573 user 1m15.993s 00:13:08.573 sys 0m9.904s 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.573 ************************************ 00:13:08.573 END TEST nvmf_fio_target 00:13:08.573 23:13:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.573 ************************************ 00:13:08.573 23:13:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.573 23:13:30 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:08.573 23:13:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.573 23:13:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.573 23:13:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.573 ************************************ 00:13:08.573 START TEST nvmf_bdevio 00:13:08.573 ************************************ 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:08.573 * Looking for test storage... 
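The teardown traced above (fio.sh@65 through fio.sh@91 together with nvmftestfini) boils down to a short command sequence. A minimal sketch follows, using only the rpc.py path, subsystem NQN, serial and target PID printed in the trace; the polling loop and its sleep interval are illustrative rather than copied from the harness.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Hot-remove the malloc namespaces while fio is still running (hotplug test).
    for bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$bdev"
    done

    # Drop the initiator-side connection and wait for the serial to disappear.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        sleep 1                                   # illustrative poll interval
    done

    # Tear down the target side and unload the initiator modules.
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job*-verify.state               # fio verify-state files
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 68672                                    # nvmf_tgt PID taken from the trace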
00:13:08.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.573 23:13:30 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:08.573 23:13:30 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.573 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.574 Cannot find device "nvmf_tgt_br" 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.574 Cannot find device "nvmf_tgt_br2" 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.574 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.832 Cannot find device "nvmf_tgt_br" 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:08.832 Cannot find device "nvmf_tgt_br2" 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:08.832 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:09.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:13:09.090 00:13:09.090 --- 10.0.0.2 ping statistics --- 00:13:09.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.090 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:09.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:13:09.090 00:13:09.090 --- 10.0.0.3 ping statistics --- 00:13:09.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.090 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:13:09.090 00:13:09.090 --- 10.0.0.1 ping statistics --- 00:13:09.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.090 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:09.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69371 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69371 00:13:09.090 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69371 ']' 00:13:09.091 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.091 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.091 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.091 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.091 23:13:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:09.091 [2024-07-24 23:13:31.423993] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:13:09.091 [2024-07-24 23:13:31.424418] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.091 [2024-07-24 23:13:31.563371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.349 [2024-07-24 23:13:31.718222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.349 [2024-07-24 23:13:31.718773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
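The nvmf_veth_init trace above builds a small virtual topology before the target is started inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that topology, limited to the first target interface (the trace wires up a second pair, nvmf_tgt_if2/nvmf_tgt_br2 with 10.0.0.3, the same way); interface names and addresses are the ones printed above.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up    # join both halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # reachability check

    # The target then runs inside the namespace, with the flags printed above:
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &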
00:13:09.349 [2024-07-24 23:13:31.719234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.349 [2024-07-24 23:13:31.719665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.349 [2024-07-24 23:13:31.719867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.349 [2024-07-24 23:13:31.720311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:09.349 [2024-07-24 23:13:31.720470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:09.349 [2024-07-24 23:13:31.720613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:09.349 [2024-07-24 23:13:31.720848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.349 [2024-07-24 23:13:31.799200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:09.916 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.916 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:13:09.916 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.916 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:09.916 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 [2024-07-24 23:13:32.427184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 Malloc0 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.175 [2024-07-24 23:13:32.511519] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:10.175 { 00:13:10.175 "params": { 00:13:10.175 "name": "Nvme$subsystem", 00:13:10.175 "trtype": "$TEST_TRANSPORT", 00:13:10.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:10.175 "adrfam": "ipv4", 00:13:10.175 "trsvcid": "$NVMF_PORT", 00:13:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:10.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:10.175 "hdgst": ${hdgst:-false}, 00:13:10.175 "ddgst": ${ddgst:-false} 00:13:10.175 }, 00:13:10.175 "method": "bdev_nvme_attach_controller" 00:13:10.175 } 00:13:10.175 EOF 00:13:10.175 )") 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:10.175 23:13:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:10.175 "params": { 00:13:10.175 "name": "Nvme1", 00:13:10.175 "trtype": "tcp", 00:13:10.175 "traddr": "10.0.0.2", 00:13:10.175 "adrfam": "ipv4", 00:13:10.175 "trsvcid": "4420", 00:13:10.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:10.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:10.175 "hdgst": false, 00:13:10.175 "ddgst": false 00:13:10.175 }, 00:13:10.175 "method": "bdev_nvme_attach_controller" 00:13:10.175 }' 00:13:10.175 [2024-07-24 23:13:32.579542] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
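The rpc_cmd calls traced above take the target from an empty configuration to a servable namespace, and bdevio then attaches to it through the JSON blob printed just before. The same bring-up expressed as standalone rpc.py invocations, a sketch assuming the default RPC socket (rpc_cmd in the harness is a thin wrapper around this script); all arguments are copied from the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192    # transport flags as printed above
    "$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The --json /dev/fd/62 argument to bdevio consumes the bdev_nvme_attach_controller configuration shown above, which is why the remote namespace appears in the test output as Nvme1n1.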
00:13:10.175 [2024-07-24 23:13:32.579675] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69407 ] 00:13:10.433 [2024-07-24 23:13:32.721317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.433 [2024-07-24 23:13:32.881472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.433 [2024-07-24 23:13:32.881560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.433 [2024-07-24 23:13:32.881567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.691 [2024-07-24 23:13:32.974425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:10.691 I/O targets: 00:13:10.691 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:10.691 00:13:10.691 00:13:10.691 CUnit - A unit testing framework for C - Version 2.1-3 00:13:10.691 http://cunit.sourceforge.net/ 00:13:10.691 00:13:10.691 00:13:10.691 Suite: bdevio tests on: Nvme1n1 00:13:10.691 Test: blockdev write read block ...passed 00:13:10.691 Test: blockdev write zeroes read block ...passed 00:13:10.691 Test: blockdev write zeroes read no split ...passed 00:13:10.691 Test: blockdev write zeroes read split ...passed 00:13:10.691 Test: blockdev write zeroes read split partial ...passed 00:13:10.691 Test: blockdev reset ...[2024-07-24 23:13:33.145757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:10.691 [2024-07-24 23:13:33.145907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d47c0 (9): Bad file descriptor 00:13:10.691 [2024-07-24 23:13:33.158036] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:10.691 passed 00:13:10.691 Test: blockdev write read 8 blocks ...passed 00:13:10.691 Test: blockdev write read size > 128k ...passed 00:13:10.691 Test: blockdev write read invalid size ...passed 00:13:10.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.691 Test: blockdev write read max offset ...passed 00:13:10.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.691 Test: blockdev writev readv 8 blocks ...passed 00:13:10.691 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.691 Test: blockdev writev readv block ...passed 00:13:10.691 Test: blockdev writev readv size > 128k ...passed 00:13:10.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.691 Test: blockdev comparev and writev ...[2024-07-24 23:13:33.167292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.691 [2024-07-24 23:13:33.167671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.167859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.168003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.168699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.168871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.169003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.169115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.169725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.169974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.170154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.170264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.170890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.170931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.170968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.692 [2024-07-24 23:13:33.170985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:10.692 passed 00:13:10.692 Test: blockdev nvme passthru rw ...passed 00:13:10.692 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:13:33.172442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.692 [2024-07-24 23:13:33.172752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.173071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.692 [2024-07-24 23:13:33.173316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.173750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.692 [2024-07-24 23:13:33.173885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:10.692 [2024-07-24 23:13:33.174178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.692 passed 00:13:10.692 Test: blockdev nvme admin passthru ...[2024-07-24 23:13:33.174211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:10.950 passed 00:13:10.950 Test: blockdev copy ...passed 00:13:10.950 00:13:10.950 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.950 suites 1 1 n/a 0 0 00:13:10.950 tests 23 23 23 0 0 00:13:10.950 asserts 152 152 152 0 n/a 00:13:10.950 00:13:10.950 Elapsed time = 0.155 seconds 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.208 rmmod nvme_tcp 00:13:11.208 rmmod nvme_fabrics 00:13:11.208 rmmod nvme_keyring 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69371 ']' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69371 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
69371 ']' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 69371 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69371 00:13:11.208 killing process with pid 69371 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69371' 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69371 00:13:11.208 23:13:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69371 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:11.775 00:13:11.775 real 0m3.152s 00:13:11.775 user 0m10.571s 00:13:11.775 sys 0m0.915s 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.775 ************************************ 00:13:11.775 END TEST nvmf_bdevio 00:13:11.775 23:13:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:11.775 ************************************ 00:13:11.775 23:13:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:11.775 23:13:34 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:11.775 23:13:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:11.775 23:13:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.775 23:13:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.775 ************************************ 00:13:11.775 START TEST nvmf_auth_target 00:13:11.775 ************************************ 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:11.775 * Looking for test storage... 
00:13:11.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.775 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:11.776 Cannot find device "nvmf_tgt_br" 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:13:11.776 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:12.038 Cannot find device "nvmf_tgt_br2" 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:12.038 Cannot find device "nvmf_tgt_br" 00:13:12.038 
23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:12.038 Cannot find device "nvmf_tgt_br2" 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:12.038 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.315 23:13:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:12.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:13:12.315 00:13:12.315 --- 10.0.0.2 ping statistics --- 00:13:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.315 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:12.315 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.315 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:12.315 00:13:12.315 --- 10.0.0.3 ping statistics --- 00:13:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.315 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:12.315 00:13:12.315 --- 10.0.0.1 ping statistics --- 00:13:12.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.315 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69590 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69590 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69590 ']' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.315 23:13:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.315 23:13:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69622 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c76a302efc831bc5a07f5f7a5e9ee23725bdad8f0ca9c135 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.cmE 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c76a302efc831bc5a07f5f7a5e9ee23725bdad8f0ca9c135 0 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c76a302efc831bc5a07f5f7a5e9ee23725bdad8f0ca9c135 0 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c76a302efc831bc5a07f5f7a5e9ee23725bdad8f0ca9c135 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:13.248 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.cmE 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.cmE 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.cmE 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a0954448828e8a7a43992716ae69b85180859619bf98723a4a295fcb684240bc 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KZH 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a0954448828e8a7a43992716ae69b85180859619bf98723a4a295fcb684240bc 3 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a0954448828e8a7a43992716ae69b85180859619bf98723a4a295fcb684240bc 3 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a0954448828e8a7a43992716ae69b85180859619bf98723a4a295fcb684240bc 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KZH 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KZH 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.KZH 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ba02a73cdd5ce6bc7d4a5b8b85900845 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dIL 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ba02a73cdd5ce6bc7d4a5b8b85900845 1 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ba02a73cdd5ce6bc7d4a5b8b85900845 1 
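For readers following the gen_dhchap_key / format_dhchap_key trace above, the pattern it shows is: read N/2 random bytes from /dev/urandom as a hex string, treat that ASCII string itself as the secret, and wrap it as DHHC-1:<hash-id>:<base64(secret || CRC-32)>:. Below is a minimal sketch of that pattern, assuming the 4-byte little-endian CRC-32 suffix and the hash-id mapping (00=null, 01=sha256, 02=sha384, 03=sha512) inferred from the secrets printed later in this log; it is an illustration, not a copy of nvmf/common.sh.

gen_key() {
  local len=$1 hash_id=$2 hex
  hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # e.g. len=48 -> 24 random bytes -> 48 hex chars
  # Hypothetical re-implementation of the DHHC-1 wrapping; the CRC-32 suffix is an assumption.
  python3 - "$hex" "$hash_id" <<'PYEOF'
import base64, sys, zlib

secret = sys.argv[1].encode()        # the ASCII hex string itself is the secret
hash_id = int(sys.argv[2])           # 0=null, 1=sha256, 2=sha384, 3=sha512 (assumed mapping)
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(hash_id, base64.b64encode(secret + crc).decode()))
PYEOF
}

gen_key 48 0    # null-digest secret, like keys[0] above
gen_key 64 3    # sha512 secret, like ckeys[0] above

The paired ckey files generated in this part of the trace are controller (bidirectional) challenge keys, which is why each keys[i] gets a matching ckeys[i] except keys[3], whose ckeys[3] is left empty.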
00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ba02a73cdd5ce6bc7d4a5b8b85900845 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dIL 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dIL 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.dIL 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9e360973c46fb951b318ef6895299582554d241ca9e245b 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Pmf 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9e360973c46fb951b318ef6895299582554d241ca9e245b 2 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9e360973c46fb951b318ef6895299582554d241ca9e245b 2 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9e360973c46fb951b318ef6895299582554d241ca9e245b 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Pmf 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Pmf 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Pmf 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:13.507 
23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=483a0edb4e734c2c625627bbd3249285a9fafcc0544481ce 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y5u 00:13:13.507 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 483a0edb4e734c2c625627bbd3249285a9fafcc0544481ce 2 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 483a0edb4e734c2c625627bbd3249285a9fafcc0544481ce 2 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=483a0edb4e734c2c625627bbd3249285a9fafcc0544481ce 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:13.508 23:13:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y5u 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y5u 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Y5u 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8084c70d4a25f6255d5c28c3c2c0303b 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Yk5 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8084c70d4a25f6255d5c28c3c2c0303b 1 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8084c70d4a25f6255d5c28c3c2c0303b 1 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8084c70d4a25f6255d5c28c3c2c0303b 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Yk5 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Yk5 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Yk5 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c856c1cffe01e8404ab0b80ddda86c514c5921ea8be8b19fd64efaf54ebfc55e 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nij 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c856c1cffe01e8404ab0b80ddda86c514c5921ea8be8b19fd64efaf54ebfc55e 3 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c856c1cffe01e8404ab0b80ddda86c514c5921ea8be8b19fd64efaf54ebfc55e 3 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c856c1cffe01e8404ab0b80ddda86c514c5921ea8be8b19fd64efaf54ebfc55e 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nij 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nij 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.nij 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69590 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69590 ']' 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
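Before the trace continues, here is a condensed sketch of the flow it is about to walk through: every generated key file is registered under the same name with both the target (rpc_cmd, default socket /var/tmp/spdk.sock) and the host application (hostrpc, /var/tmp/host.sock), the target is told to require DH-HMAC-CHAP for the host NQN, and the host then attaches with the matching key pair. Paths, NQNs and RPC names are the ones visible in the trace; the explicit -s arguments simply stand in for the rpc_cmd/hostrpc wrappers.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
TGT_SOCK=/var/tmp/spdk.sock      # nvmf_tgt, started with -L nvmf_auth
HOST_SOCK=/var/tmp/host.sock     # spdk_tgt acting as the NVMe/TCP host, started with -L nvme_auth

# Register the same key files under the same names on both sides.
"$RPC" -s "$TGT_SOCK"  keyring_file_add_key key0  /tmp/spdk.key-null.cmE
"$RPC" -s "$HOST_SOCK" keyring_file_add_key key0  /tmp/spdk.key-null.cmE
"$RPC" -s "$TGT_SOCK"  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KZH   # controller (bidirectional) key
"$RPC" -s "$HOST_SOCK" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KZH

# Target side: require DH-HMAC-CHAP for this host NQN with key0/ckey0.
"$RPC" -s "$TGT_SOCK" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: restrict the negotiable digests/dhgroups, then attach with the matching keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

Registering the same file under the same key name on both sockets is what lets the later --dhchap-key key0 / --dhchap-ctrlr-key ckey0 references resolve identically on target and host.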
00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.766 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69622 /var/tmp/host.sock 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69622 ']' 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.023 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cmE 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cmE 00:13:14.281 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cmE 00:13:14.539 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.KZH ]] 00:13:14.539 23:13:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KZH 00:13:14.539 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.539 23:13:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.539 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.539 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KZH 00:13:14.539 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.KZH 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dIL 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dIL 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dIL 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Pmf ]] 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pmf 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pmf 00:13:15.106 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Pmf 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5u 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5u 00:13:15.674 23:13:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Y5u 00:13:15.674 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Yk5 ]] 00:13:15.674 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yk5 00:13:15.674 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.674 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yk5 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yk5 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:15.935 
23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nij 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.nij 00:13:15.935 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.nij 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.194 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:16.760 23:13:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.018 00:13:17.018 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.018 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:17.018 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.277 { 00:13:17.277 "cntlid": 1, 00:13:17.277 "qid": 0, 00:13:17.277 "state": "enabled", 00:13:17.277 "thread": "nvmf_tgt_poll_group_000", 00:13:17.277 "listen_address": { 00:13:17.277 "trtype": "TCP", 00:13:17.277 "adrfam": "IPv4", 00:13:17.277 "traddr": "10.0.0.2", 00:13:17.277 "trsvcid": "4420" 00:13:17.277 }, 00:13:17.277 "peer_address": { 00:13:17.277 "trtype": "TCP", 00:13:17.277 "adrfam": "IPv4", 00:13:17.277 "traddr": "10.0.0.1", 00:13:17.277 "trsvcid": "40792" 00:13:17.277 }, 00:13:17.277 "auth": { 00:13:17.277 "state": "completed", 00:13:17.277 "digest": "sha256", 00:13:17.277 "dhgroup": "null" 00:13:17.277 } 00:13:17.277 } 00:13:17.277 ]' 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:17.277 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.535 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.535 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.535 23:13:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.793 23:13:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:13:23.074 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.075 23:13:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.075 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.075 { 00:13:23.075 "cntlid": 3, 00:13:23.075 "qid": 0, 00:13:23.075 "state": "enabled", 00:13:23.075 "thread": "nvmf_tgt_poll_group_000", 00:13:23.075 "listen_address": { 00:13:23.075 "trtype": "TCP", 00:13:23.075 "adrfam": "IPv4", 00:13:23.075 "traddr": "10.0.0.2", 00:13:23.075 "trsvcid": "4420" 00:13:23.075 }, 00:13:23.075 "peer_address": { 00:13:23.075 "trtype": "TCP", 00:13:23.075 
"adrfam": "IPv4", 00:13:23.075 "traddr": "10.0.0.1", 00:13:23.075 "trsvcid": "45278" 00:13:23.075 }, 00:13:23.075 "auth": { 00:13:23.075 "state": "completed", 00:13:23.075 "digest": "sha256", 00:13:23.075 "dhgroup": "null" 00:13:23.075 } 00:13:23.075 } 00:13:23.075 ]' 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:23.075 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.332 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.332 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.332 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.589 23:13:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:24.154 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.412 23:13:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.669 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.927 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.927 { 00:13:24.927 "cntlid": 5, 00:13:24.927 "qid": 0, 00:13:24.927 "state": "enabled", 00:13:24.927 "thread": "nvmf_tgt_poll_group_000", 00:13:24.927 "listen_address": { 00:13:24.927 "trtype": "TCP", 00:13:24.927 "adrfam": "IPv4", 00:13:24.927 "traddr": "10.0.0.2", 00:13:24.927 "trsvcid": "4420" 00:13:24.927 }, 00:13:24.927 "peer_address": { 00:13:24.927 "trtype": "TCP", 00:13:24.927 "adrfam": "IPv4", 00:13:24.927 "traddr": "10.0.0.1", 00:13:24.927 "trsvcid": "45304" 00:13:24.927 }, 00:13:24.927 "auth": { 00:13:24.927 "state": "completed", 00:13:24.927 "digest": "sha256", 00:13:24.927 "dhgroup": "null" 00:13:24.927 } 00:13:24.927 } 00:13:24.927 ]' 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.185 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.444 23:13:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:26.378 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.636 23:13:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:26.895 00:13:26.895 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.895 23:13:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.895 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.154 { 00:13:27.154 "cntlid": 7, 00:13:27.154 "qid": 0, 00:13:27.154 "state": "enabled", 00:13:27.154 "thread": "nvmf_tgt_poll_group_000", 00:13:27.154 "listen_address": { 00:13:27.154 "trtype": "TCP", 00:13:27.154 "adrfam": "IPv4", 00:13:27.154 "traddr": "10.0.0.2", 00:13:27.154 "trsvcid": "4420" 00:13:27.154 }, 00:13:27.154 "peer_address": { 00:13:27.154 "trtype": "TCP", 00:13:27.154 "adrfam": "IPv4", 00:13:27.154 "traddr": "10.0.0.1", 00:13:27.154 "trsvcid": "45324" 00:13:27.154 }, 00:13:27.154 "auth": { 00:13:27.154 "state": "completed", 00:13:27.154 "digest": "sha256", 00:13:27.154 "dhgroup": "null" 00:13:27.154 } 00:13:27.154 } 00:13:27.154 ]' 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:27.154 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.413 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:27.413 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.413 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.413 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.413 23:13:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.723 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:28.288 23:13:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.855 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.113 00:13:29.113 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.113 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.113 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.372 { 00:13:29.372 "cntlid": 9, 00:13:29.372 "qid": 0, 00:13:29.372 "state": "enabled", 00:13:29.372 "thread": "nvmf_tgt_poll_group_000", 00:13:29.372 "listen_address": { 00:13:29.372 "trtype": "TCP", 00:13:29.372 "adrfam": 
"IPv4", 00:13:29.372 "traddr": "10.0.0.2", 00:13:29.372 "trsvcid": "4420" 00:13:29.372 }, 00:13:29.372 "peer_address": { 00:13:29.372 "trtype": "TCP", 00:13:29.372 "adrfam": "IPv4", 00:13:29.372 "traddr": "10.0.0.1", 00:13:29.372 "trsvcid": "58812" 00:13:29.372 }, 00:13:29.372 "auth": { 00:13:29.372 "state": "completed", 00:13:29.372 "digest": "sha256", 00:13:29.372 "dhgroup": "ffdhe2048" 00:13:29.372 } 00:13:29.372 } 00:13:29.372 ]' 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:29.372 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:29.630 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.630 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.630 23:13:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.889 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.485 23:13:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 
-- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.778 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.035 00:13:31.035 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.035 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.035 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:31.602 { 00:13:31.602 "cntlid": 11, 00:13:31.602 "qid": 0, 00:13:31.602 "state": "enabled", 00:13:31.602 "thread": "nvmf_tgt_poll_group_000", 00:13:31.602 "listen_address": { 00:13:31.602 "trtype": "TCP", 00:13:31.602 "adrfam": "IPv4", 00:13:31.602 "traddr": "10.0.0.2", 00:13:31.602 "trsvcid": "4420" 00:13:31.602 }, 00:13:31.602 "peer_address": { 00:13:31.602 "trtype": "TCP", 00:13:31.602 "adrfam": "IPv4", 00:13:31.602 "traddr": "10.0.0.1", 00:13:31.602 "trsvcid": "58838" 00:13:31.602 }, 00:13:31.602 "auth": { 00:13:31.602 "state": "completed", 00:13:31.602 "digest": "sha256", 00:13:31.602 "dhgroup": "ffdhe2048" 00:13:31.602 } 00:13:31.602 } 00:13:31.602 ]' 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.602 23:13:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.602 23:13:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.860 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:13:32.425 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.425 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:32.425 23:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.425 23:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.684 23:13:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.684 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:32.684 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.684 23:13:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.942 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.201 00:13:33.201 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:33.201 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:33.201 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.459 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.459 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.459 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.459 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.717 23:13:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.717 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:33.717 { 00:13:33.717 "cntlid": 13, 00:13:33.717 "qid": 0, 00:13:33.717 "state": "enabled", 00:13:33.717 "thread": "nvmf_tgt_poll_group_000", 00:13:33.717 "listen_address": { 00:13:33.717 "trtype": "TCP", 00:13:33.717 "adrfam": "IPv4", 00:13:33.717 "traddr": "10.0.0.2", 00:13:33.717 "trsvcid": "4420" 00:13:33.717 }, 00:13:33.717 "peer_address": { 00:13:33.717 "trtype": "TCP", 00:13:33.717 "adrfam": "IPv4", 00:13:33.717 "traddr": "10.0.0.1", 00:13:33.717 "trsvcid": "58874" 00:13:33.717 }, 00:13:33.717 "auth": { 00:13:33.717 "state": "completed", 00:13:33.717 "digest": "sha256", 00:13:33.717 "dhgroup": "ffdhe2048" 00:13:33.717 } 00:13:33.717 } 00:13:33.717 ]' 00:13:33.717 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:33.717 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.717 23:13:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.717 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:33.717 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.717 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.717 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.717 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.976 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 
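The cycle this trace keeps repeating can be condensed as the sketch below. It is assembled only from commands visible in the log (rpc.py path, host socket, NQNs, key names, nvme-cli flags are copied from the trace); the helper name connect_cycle, its parameters, and the variable names are illustrative and are not the actual target/auth.sh implementation.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target-side RPC, default socket
host_sock=/var/tmp/host.sock                         # SPDK initiator app socket, as in the log
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53
hostid=e26f5e1a-ae07-4101-a640-4712c9abba53

# keyid        : 0..3, selects the named keys key$keyid / ckey$keyid used in the run
# secret, ctrl : the corresponding DHHC-1:xx:...: strings printed in the nvme connect lines
connect_cycle() {
    local keyid=$1 secret=$2 ctrl_secret=$3

    # Target: authorize the host on the subsystem with a DH-HMAC-CHAP key pair.
    "$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # SPDK initiator: attach a controller, authenticating with the same pair.
    "$rpc_py" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # (qpair verification happens here -- see the jq checks sketched further down)

    "$rpc_py" -s "$host_sock" bdev_nvme_detach_controller nvme0

    # Kernel initiator: repeat the handshake with nvme-cli using the raw DHHC-1 secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"

    # Clean up so the next key/DH-group combination starts from scratch.
    "$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

Note that for keyid 3 the trace drops the controller key entirely (the ${ckeys[$3]:+...} expansion evaluates to nothing and nvme connect is issued without --dhchap-ctrl-secret), presumably to also cover the case where only the host authenticates.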
00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.544 23:13:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:34.802 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:35.369 00:13:35.369 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:35.369 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.369 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:35.627 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.627 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.628 { 00:13:35.628 "cntlid": 15, 00:13:35.628 "qid": 0, 
00:13:35.628 "state": "enabled", 00:13:35.628 "thread": "nvmf_tgt_poll_group_000", 00:13:35.628 "listen_address": { 00:13:35.628 "trtype": "TCP", 00:13:35.628 "adrfam": "IPv4", 00:13:35.628 "traddr": "10.0.0.2", 00:13:35.628 "trsvcid": "4420" 00:13:35.628 }, 00:13:35.628 "peer_address": { 00:13:35.628 "trtype": "TCP", 00:13:35.628 "adrfam": "IPv4", 00:13:35.628 "traddr": "10.0.0.1", 00:13:35.628 "trsvcid": "58898" 00:13:35.628 }, 00:13:35.628 "auth": { 00:13:35.628 "state": "completed", 00:13:35.628 "digest": "sha256", 00:13:35.628 "dhgroup": "ffdhe2048" 00:13:35.628 } 00:13:35.628 } 00:13:35.628 ]' 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:35.628 23:13:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.628 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.628 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.628 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.887 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:36.453 23:13:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:36.712 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.970 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.971 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.229 00:13:37.229 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.229 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.229 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.487 { 00:13:37.487 "cntlid": 17, 00:13:37.487 "qid": 0, 00:13:37.487 "state": "enabled", 00:13:37.487 "thread": "nvmf_tgt_poll_group_000", 00:13:37.487 "listen_address": { 00:13:37.487 "trtype": "TCP", 00:13:37.487 "adrfam": "IPv4", 00:13:37.487 "traddr": "10.0.0.2", 00:13:37.487 "trsvcid": "4420" 00:13:37.487 }, 00:13:37.487 "peer_address": { 00:13:37.487 "trtype": "TCP", 00:13:37.487 "adrfam": "IPv4", 00:13:37.487 "traddr": "10.0.0.1", 00:13:37.487 "trsvcid": "58924" 00:13:37.487 }, 00:13:37.487 "auth": { 00:13:37.487 "state": "completed", 00:13:37.487 "digest": "sha256", 00:13:37.487 "dhgroup": "ffdhe3072" 00:13:37.487 } 00:13:37.487 } 00:13:37.487 ]' 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.487 23:13:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.745 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.678 23:14:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.678 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.678 
23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.937 00:13:38.937 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.937 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.937 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.195 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.195 { 00:13:39.195 "cntlid": 19, 00:13:39.195 "qid": 0, 00:13:39.195 "state": "enabled", 00:13:39.195 "thread": "nvmf_tgt_poll_group_000", 00:13:39.195 "listen_address": { 00:13:39.195 "trtype": "TCP", 00:13:39.195 "adrfam": "IPv4", 00:13:39.195 "traddr": "10.0.0.2", 00:13:39.195 "trsvcid": "4420" 00:13:39.195 }, 00:13:39.195 "peer_address": { 00:13:39.195 "trtype": "TCP", 00:13:39.195 "adrfam": "IPv4", 00:13:39.195 "traddr": "10.0.0.1", 00:13:39.196 "trsvcid": "54958" 00:13:39.196 }, 00:13:39.196 "auth": { 00:13:39.196 "state": "completed", 00:13:39.196 "digest": "sha256", 00:13:39.196 "dhgroup": "ffdhe3072" 00:13:39.196 } 00:13:39.196 } 00:13:39.196 ]' 00:13:39.196 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.454 23:14:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.712 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
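The per-connection assertions that follow each attach in this log reduce to one name check on the initiator side plus three jq lookups against nvmf_subsystem_get_qpairs on the target side. A minimal sketch, reusing the variables from the earlier sketch; the expected dhgroup is whatever was passed to bdev_nvme_set_options for the current iteration (ffdhe3072 at this point in the log).

# Initiator side: a controller named nvme0 must exist after the attach.
[[ $("$rpc_py" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair must report a completed DH-HMAC-CHAP negotiation
# with the digest and DH group configured for this iteration.
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]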
00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.279 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.538 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.538 23:14:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.796 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.104 00:13:41.104 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.104 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.104 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.362 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.362 { 00:13:41.362 "cntlid": 21, 00:13:41.362 "qid": 0, 00:13:41.362 "state": "enabled", 00:13:41.362 "thread": "nvmf_tgt_poll_group_000", 00:13:41.363 "listen_address": { 00:13:41.363 "trtype": "TCP", 00:13:41.363 "adrfam": "IPv4", 00:13:41.363 "traddr": "10.0.0.2", 00:13:41.363 "trsvcid": "4420" 00:13:41.363 }, 00:13:41.363 "peer_address": { 00:13:41.363 "trtype": "TCP", 00:13:41.363 "adrfam": "IPv4", 00:13:41.363 "traddr": "10.0.0.1", 00:13:41.363 "trsvcid": "54986" 00:13:41.363 }, 00:13:41.363 "auth": { 00:13:41.363 "state": "completed", 00:13:41.363 "digest": "sha256", 00:13:41.363 "dhgroup": "ffdhe3072" 00:13:41.363 } 00:13:41.363 } 00:13:41.363 ]' 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.363 23:14:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.929 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.494 23:14:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:13:42.752 23:14:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:42.752 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:43.011 00:13:43.011 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.011 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.011 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.269 { 00:13:43.269 "cntlid": 23, 00:13:43.269 "qid": 0, 00:13:43.269 "state": "enabled", 00:13:43.269 "thread": "nvmf_tgt_poll_group_000", 00:13:43.269 "listen_address": { 00:13:43.269 "trtype": "TCP", 00:13:43.269 "adrfam": "IPv4", 00:13:43.269 "traddr": "10.0.0.2", 00:13:43.269 "trsvcid": "4420" 00:13:43.269 }, 00:13:43.269 "peer_address": { 00:13:43.269 "trtype": "TCP", 00:13:43.269 "adrfam": "IPv4", 00:13:43.269 "traddr": "10.0.0.1", 00:13:43.269 "trsvcid": "55022" 00:13:43.269 }, 00:13:43.269 "auth": { 00:13:43.269 "state": "completed", 00:13:43.269 "digest": "sha256", 00:13:43.269 "dhgroup": "ffdhe3072" 00:13:43.269 } 00:13:43.269 } 00:13:43.269 ]' 00:13:43.269 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.527 23:14:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.785 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:13:44.351 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.351 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:44.351 23:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.351 23:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.610 23:14:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.610 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.610 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.610 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.610 23:14:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.940 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.199 00:13:45.199 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.199 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.199 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.457 { 00:13:45.457 "cntlid": 25, 00:13:45.457 "qid": 0, 00:13:45.457 "state": "enabled", 00:13:45.457 "thread": "nvmf_tgt_poll_group_000", 00:13:45.457 "listen_address": { 00:13:45.457 "trtype": "TCP", 00:13:45.457 "adrfam": "IPv4", 00:13:45.457 "traddr": "10.0.0.2", 00:13:45.457 "trsvcid": "4420" 00:13:45.457 }, 00:13:45.457 "peer_address": { 00:13:45.457 "trtype": "TCP", 00:13:45.457 "adrfam": "IPv4", 00:13:45.457 "traddr": "10.0.0.1", 00:13:45.457 "trsvcid": "55030" 00:13:45.457 }, 00:13:45.457 "auth": { 00:13:45.457 "state": "completed", 00:13:45.457 "digest": "sha256", 00:13:45.457 "dhgroup": "ffdhe4096" 00:13:45.457 } 00:13:45.457 } 00:13:45.457 ]' 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.457 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:45.716 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.716 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.716 23:14:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.974 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret 
DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.540 23:14:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.798 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.365 00:13:47.365 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.365 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.365 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
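Zooming out, this stretch of the log is one sweep of a nested loop: for every DH group the test reconfigures the SPDK initiator with bdev_nvme_set_options and then runs the connect/verify/disconnect cycle once per key index. The sketch below shows only that outer structure, limited to the digest and groups actually visible in this portion of the log (later portions may sweep others); connect_cycle refers to the earlier sketch, and the secrets/ctrl_secrets arrays are assumed holders for the DHHC-1 strings printed in the nvme connect lines.

dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen so far in this section

# secrets[i] / ctrl_secrets[i]: DHHC-1:xx:...: strings for key$i (values elided here)
for dhgroup in "${dhgroups[@]}"; do
    for keyid in 0 1 2 3; do
        # Pin the initiator to a single digest/DH group so the values asserted
        # from nvmf_subsystem_get_qpairs are fully determined by this iteration.
        "$rpc_py" -s "$host_sock" bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_cycle "$keyid" "${secrets[$keyid]}" "${ctrl_secrets[$keyid]}"
    done
done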
00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:47.623 { 00:13:47.623 "cntlid": 27, 00:13:47.623 "qid": 0, 00:13:47.623 "state": "enabled", 00:13:47.623 "thread": "nvmf_tgt_poll_group_000", 00:13:47.623 "listen_address": { 00:13:47.623 "trtype": "TCP", 00:13:47.623 "adrfam": "IPv4", 00:13:47.623 "traddr": "10.0.0.2", 00:13:47.623 "trsvcid": "4420" 00:13:47.623 }, 00:13:47.623 "peer_address": { 00:13:47.623 "trtype": "TCP", 00:13:47.623 "adrfam": "IPv4", 00:13:47.623 "traddr": "10.0.0.1", 00:13:47.623 "trsvcid": "55064" 00:13:47.623 }, 00:13:47.623 "auth": { 00:13:47.623 "state": "completed", 00:13:47.623 "digest": "sha256", 00:13:47.623 "dhgroup": "ffdhe4096" 00:13:47.623 } 00:13:47.623 } 00:13:47.623 ]' 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.623 23:14:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.623 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.623 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.623 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.881 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:13:48.816 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.816 23:14:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:48.816 23:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.816 23:14:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.816 23:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.102 23:14:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.102 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.102 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.360 00:13:49.360 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:49.360 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:49.360 23:14:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.619 { 00:13:49.619 "cntlid": 29, 00:13:49.619 "qid": 0, 00:13:49.619 "state": "enabled", 00:13:49.619 "thread": "nvmf_tgt_poll_group_000", 00:13:49.619 "listen_address": { 00:13:49.619 "trtype": "TCP", 00:13:49.619 "adrfam": "IPv4", 00:13:49.619 "traddr": "10.0.0.2", 00:13:49.619 "trsvcid": "4420" 00:13:49.619 }, 00:13:49.619 "peer_address": { 00:13:49.619 "trtype": "TCP", 00:13:49.619 "adrfam": "IPv4", 00:13:49.619 "traddr": "10.0.0.1", 00:13:49.619 "trsvcid": "58486" 00:13:49.619 }, 00:13:49.619 "auth": { 00:13:49.619 "state": "completed", 00:13:49.619 "digest": "sha256", 00:13:49.619 "dhgroup": 
"ffdhe4096" 00:13:49.619 } 00:13:49.619 } 00:13:49.619 ]' 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.619 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.877 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.877 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.877 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.877 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.877 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.136 23:14:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:13:50.702 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.702 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:50.702 23:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.702 23:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.703 23:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.703 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.703 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.703 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.960 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:50.961 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:51.538 00:13:51.538 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.538 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.538 23:14:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.796 { 00:13:51.796 "cntlid": 31, 00:13:51.796 "qid": 0, 00:13:51.796 "state": "enabled", 00:13:51.796 "thread": "nvmf_tgt_poll_group_000", 00:13:51.796 "listen_address": { 00:13:51.796 "trtype": "TCP", 00:13:51.796 "adrfam": "IPv4", 00:13:51.796 "traddr": "10.0.0.2", 00:13:51.796 "trsvcid": "4420" 00:13:51.796 }, 00:13:51.796 "peer_address": { 00:13:51.796 "trtype": "TCP", 00:13:51.796 "adrfam": "IPv4", 00:13:51.796 "traddr": "10.0.0.1", 00:13:51.796 "trsvcid": "58520" 00:13:51.796 }, 00:13:51.796 "auth": { 00:13:51.796 "state": "completed", 00:13:51.796 "digest": "sha256", 00:13:51.796 "dhgroup": "ffdhe4096" 00:13:51.796 } 00:13:51.796 } 00:13:51.796 ]' 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.796 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.054 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.054 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.054 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.055 23:14:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid 
e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:52.991 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.557 00:13:53.557 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.557 23:14:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.557 23:14:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.815 { 00:13:53.815 "cntlid": 33, 00:13:53.815 "qid": 0, 00:13:53.815 "state": "enabled", 00:13:53.815 "thread": "nvmf_tgt_poll_group_000", 00:13:53.815 "listen_address": { 00:13:53.815 "trtype": "TCP", 00:13:53.815 "adrfam": "IPv4", 00:13:53.815 "traddr": "10.0.0.2", 00:13:53.815 "trsvcid": "4420" 00:13:53.815 }, 00:13:53.815 "peer_address": { 00:13:53.815 "trtype": "TCP", 00:13:53.815 "adrfam": "IPv4", 00:13:53.815 "traddr": "10.0.0.1", 00:13:53.815 "trsvcid": "58564" 00:13:53.815 }, 00:13:53.815 "auth": { 00:13:53.815 "state": "completed", 00:13:53.815 "digest": "sha256", 00:13:53.815 "dhgroup": "ffdhe6144" 00:13:53.815 } 00:13:53.815 } 00:13:53.815 ]' 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.815 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.073 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.073 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.073 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.073 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.073 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.332 23:14:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.899 
23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.899 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.159 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.418 23:14:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.418 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.418 23:14:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.676 00:13:55.676 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.676 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.676 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.935 { 00:13:55.935 "cntlid": 35, 00:13:55.935 "qid": 0, 00:13:55.935 "state": "enabled", 00:13:55.935 "thread": "nvmf_tgt_poll_group_000", 00:13:55.935 "listen_address": { 00:13:55.935 "trtype": "TCP", 00:13:55.935 "adrfam": "IPv4", 00:13:55.935 "traddr": "10.0.0.2", 00:13:55.935 "trsvcid": "4420" 00:13:55.935 }, 00:13:55.935 "peer_address": { 00:13:55.935 "trtype": "TCP", 00:13:55.935 
"adrfam": "IPv4", 00:13:55.935 "traddr": "10.0.0.1", 00:13:55.935 "trsvcid": "58598" 00:13:55.935 }, 00:13:55.935 "auth": { 00:13:55.935 "state": "completed", 00:13:55.935 "digest": "sha256", 00:13:55.935 "dhgroup": "ffdhe6144" 00:13:55.935 } 00:13:55.935 } 00:13:55.935 ]' 00:13:55.935 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.194 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.453 23:14:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:13:57.020 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.020 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:57.020 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.020 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.278 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.278 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:57.278 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.278 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.537 23:14:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:57.860 00:13:57.860 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:57.860 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.860 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.134 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.134 { 00:13:58.134 "cntlid": 37, 00:13:58.134 "qid": 0, 00:13:58.134 "state": "enabled", 00:13:58.134 "thread": "nvmf_tgt_poll_group_000", 00:13:58.134 "listen_address": { 00:13:58.134 "trtype": "TCP", 00:13:58.134 "adrfam": "IPv4", 00:13:58.134 "traddr": "10.0.0.2", 00:13:58.134 "trsvcid": "4420" 00:13:58.134 }, 00:13:58.134 "peer_address": { 00:13:58.134 "trtype": "TCP", 00:13:58.134 "adrfam": "IPv4", 00:13:58.134 "traddr": "10.0.0.1", 00:13:58.134 "trsvcid": "58628" 00:13:58.134 }, 00:13:58.134 "auth": { 00:13:58.134 "state": "completed", 00:13:58.134 "digest": "sha256", 00:13:58.134 "dhgroup": "ffdhe6144" 00:13:58.134 } 00:13:58.134 } 00:13:58.134 ]' 00:13:58.135 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.135 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.135 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:58.394 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:58.394 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:58.394 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.394 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.394 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.654 23:14:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:13:59.221 23:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:59.479 23:14:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.738 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:00.305 00:14:00.305 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
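The trace above repeats the same round once per key index: the target is told which host NQN may authenticate with a given key pair, the SPDK host attaches a controller with the matching keys, and the controller and qpair state are read back before the session is torn down. A condensed sketch of one round, using only commands and flags that appear in the trace (key index 2 picked as an example; rpc_cmd is the harness wrapper for the target-side rpc.py):

    # target side: allow the host NQN to authenticate with key2/ckey2
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host side (RPC socket /var/tmp/host.sock): attach with the same key pair
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # read back: the controller name should be nvme0, and the target should report
    # one enabled qpair whose auth block reflects the negotiated digest and dhgroup
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0

    # tear down before the next key/dhgroup combination
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53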
00:14:00.305 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:00.305 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:00.564 { 00:14:00.564 "cntlid": 39, 00:14:00.564 "qid": 0, 00:14:00.564 "state": "enabled", 00:14:00.564 "thread": "nvmf_tgt_poll_group_000", 00:14:00.564 "listen_address": { 00:14:00.564 "trtype": "TCP", 00:14:00.564 "adrfam": "IPv4", 00:14:00.564 "traddr": "10.0.0.2", 00:14:00.564 "trsvcid": "4420" 00:14:00.564 }, 00:14:00.564 "peer_address": { 00:14:00.564 "trtype": "TCP", 00:14:00.564 "adrfam": "IPv4", 00:14:00.564 "traddr": "10.0.0.1", 00:14:00.564 "trsvcid": "34886" 00:14:00.564 }, 00:14:00.564 "auth": { 00:14:00.564 "state": "completed", 00:14:00.564 "digest": "sha256", 00:14:00.564 "dhgroup": "ffdhe6144" 00:14:00.564 } 00:14:00.564 } 00:14:00.564 ]' 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.564 23:14:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.822 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.759 23:14:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.759 23:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.759 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.694 00:14:02.694 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:02.694 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.694 23:14:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.953 { 00:14:02.953 "cntlid": 41, 00:14:02.953 "qid": 0, 00:14:02.953 "state": "enabled", 00:14:02.953 "thread": "nvmf_tgt_poll_group_000", 00:14:02.953 "listen_address": { 00:14:02.953 "trtype": 
"TCP", 00:14:02.953 "adrfam": "IPv4", 00:14:02.953 "traddr": "10.0.0.2", 00:14:02.953 "trsvcid": "4420" 00:14:02.953 }, 00:14:02.953 "peer_address": { 00:14:02.953 "trtype": "TCP", 00:14:02.953 "adrfam": "IPv4", 00:14:02.953 "traddr": "10.0.0.1", 00:14:02.953 "trsvcid": "34918" 00:14:02.953 }, 00:14:02.953 "auth": { 00:14:02.953 "state": "completed", 00:14:02.953 "digest": "sha256", 00:14:02.953 "dhgroup": "ffdhe8192" 00:14:02.953 } 00:14:02.953 } 00:14:02.953 ]' 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.953 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.212 23:14:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.148 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:04.407 23:14:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.407 23:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.974 00:14:04.974 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.974 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.974 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.233 { 00:14:05.233 "cntlid": 43, 00:14:05.233 "qid": 0, 00:14:05.233 "state": "enabled", 00:14:05.233 "thread": "nvmf_tgt_poll_group_000", 00:14:05.233 "listen_address": { 00:14:05.233 "trtype": "TCP", 00:14:05.233 "adrfam": "IPv4", 00:14:05.233 "traddr": "10.0.0.2", 00:14:05.233 "trsvcid": "4420" 00:14:05.233 }, 00:14:05.233 "peer_address": { 00:14:05.233 "trtype": "TCP", 00:14:05.233 "adrfam": "IPv4", 00:14:05.233 "traddr": "10.0.0.1", 00:14:05.233 "trsvcid": "34946" 00:14:05.233 }, 00:14:05.233 "auth": { 00:14:05.233 "state": "completed", 00:14:05.233 "digest": "sha256", 00:14:05.233 "dhgroup": "ffdhe8192" 00:14:05.233 } 00:14:05.233 } 00:14:05.233 ]' 00:14:05.233 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.492 23:14:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.751 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.713 23:14:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:06.713 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.280 00:14:07.538 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.538 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.538 23:14:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.796 { 00:14:07.796 "cntlid": 45, 00:14:07.796 "qid": 0, 00:14:07.796 "state": "enabled", 00:14:07.796 "thread": "nvmf_tgt_poll_group_000", 00:14:07.796 "listen_address": { 00:14:07.796 "trtype": "TCP", 00:14:07.796 "adrfam": "IPv4", 00:14:07.796 "traddr": "10.0.0.2", 00:14:07.796 "trsvcid": "4420" 00:14:07.796 }, 00:14:07.796 "peer_address": { 00:14:07.796 "trtype": "TCP", 00:14:07.796 "adrfam": "IPv4", 00:14:07.796 "traddr": "10.0.0.1", 00:14:07.796 "trsvcid": "34980" 00:14:07.796 }, 00:14:07.796 "auth": { 00:14:07.796 "state": "completed", 00:14:07.796 "digest": "sha256", 00:14:07.796 "dhgroup": "ffdhe8192" 00:14:07.796 } 00:14:07.796 } 00:14:07.796 ]' 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.796 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.797 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.797 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.362 23:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:08.928 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:09.187 23:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:10.122 00:14:10.122 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:10.122 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.122 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:10.381 { 00:14:10.381 "cntlid": 47, 00:14:10.381 "qid": 0, 00:14:10.381 "state": "enabled", 00:14:10.381 "thread": "nvmf_tgt_poll_group_000", 00:14:10.381 "listen_address": { 00:14:10.381 "trtype": "TCP", 00:14:10.381 "adrfam": "IPv4", 00:14:10.381 "traddr": "10.0.0.2", 00:14:10.381 "trsvcid": "4420" 00:14:10.381 }, 00:14:10.381 "peer_address": { 00:14:10.381 "trtype": "TCP", 00:14:10.381 "adrfam": "IPv4", 00:14:10.381 "traddr": "10.0.0.1", 00:14:10.381 "trsvcid": "37810" 00:14:10.381 }, 00:14:10.381 "auth": { 00:14:10.381 "state": "completed", 00:14:10.381 "digest": "sha256", 00:14:10.381 "dhgroup": "ffdhe8192" 00:14:10.381 } 00:14:10.381 } 00:14:10.381 ]' 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.381 23:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.949 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:11.513 23:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
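Each round is verified the same way: jq filters over the qpair listing confirm the negotiated digest, DH group, and a "completed" auth state, and the kernel initiator then repeats the handshake through nvme-cli with the plaintext DHHC-1 secrets. A minimal sketch of those checks, assuming the qpair JSON layout shown above (secrets abbreviated, not real values):

    # host-side checks against the qpair listing returned by the target
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # e.g. sha384
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # e.g. null, ffdhe6144, ffdhe8192
    jq -r '.[0].auth.state'   <<< "$qpairs"   # must be "completed"

    # kernel initiator pass: connect with the DH-HMAC-CHAP secrets, then disconnect
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
        --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0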
00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.771 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.029 00:14:12.029 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.029 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.029 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.288 { 00:14:12.288 "cntlid": 49, 00:14:12.288 "qid": 0, 00:14:12.288 "state": "enabled", 00:14:12.288 "thread": "nvmf_tgt_poll_group_000", 00:14:12.288 "listen_address": { 00:14:12.288 "trtype": "TCP", 00:14:12.288 "adrfam": "IPv4", 00:14:12.288 "traddr": "10.0.0.2", 00:14:12.288 "trsvcid": "4420" 00:14:12.288 }, 00:14:12.288 "peer_address": { 00:14:12.288 "trtype": "TCP", 00:14:12.288 "adrfam": "IPv4", 00:14:12.288 "traddr": "10.0.0.1", 00:14:12.288 "trsvcid": "37828" 00:14:12.288 }, 00:14:12.288 "auth": { 00:14:12.288 "state": "completed", 00:14:12.288 "digest": "sha384", 00:14:12.288 "dhgroup": "null" 00:14:12.288 } 00:14:12.288 } 00:14:12.288 ]' 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.288 23:14:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:12.288 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.547 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.547 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.547 23:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.806 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:13.372 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.631 23:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.889 00:14:13.889 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.889 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.889 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.147 { 00:14:14.147 "cntlid": 51, 00:14:14.147 "qid": 0, 00:14:14.147 "state": "enabled", 00:14:14.147 "thread": "nvmf_tgt_poll_group_000", 00:14:14.147 "listen_address": { 00:14:14.147 "trtype": "TCP", 00:14:14.147 "adrfam": "IPv4", 00:14:14.147 "traddr": "10.0.0.2", 00:14:14.147 "trsvcid": "4420" 00:14:14.147 }, 00:14:14.147 "peer_address": { 00:14:14.147 "trtype": "TCP", 00:14:14.147 "adrfam": "IPv4", 00:14:14.147 "traddr": "10.0.0.1", 00:14:14.147 "trsvcid": "37848" 00:14:14.147 }, 00:14:14.147 "auth": { 00:14:14.147 "state": "completed", 00:14:14.147 "digest": "sha384", 00:14:14.147 "dhgroup": "null" 00:14:14.147 } 00:14:14.147 } 00:14:14.147 ]' 00:14:14.147 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.449 23:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.728 23:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.296 23:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:15.861 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.119 00:14:16.119 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.119 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.119 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.378 { 00:14:16.378 "cntlid": 53, 00:14:16.378 "qid": 0, 00:14:16.378 "state": "enabled", 00:14:16.378 "thread": "nvmf_tgt_poll_group_000", 00:14:16.378 "listen_address": { 00:14:16.378 "trtype": "TCP", 00:14:16.378 "adrfam": "IPv4", 00:14:16.378 "traddr": "10.0.0.2", 00:14:16.378 "trsvcid": "4420" 00:14:16.378 }, 00:14:16.378 "peer_address": { 00:14:16.378 "trtype": "TCP", 00:14:16.378 "adrfam": "IPv4", 00:14:16.378 "traddr": "10.0.0.1", 00:14:16.378 "trsvcid": "37874" 00:14:16.378 }, 00:14:16.378 "auth": { 00:14:16.378 "state": "completed", 00:14:16.378 "digest": "sha384", 00:14:16.378 "dhgroup": "null" 00:14:16.378 } 00:14:16.378 } 00:14:16.378 ]' 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.378 23:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.636 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.569 23:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:17.883 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:18.142 00:14:18.142 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:18.142 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.142 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.400 { 00:14:18.400 "cntlid": 55, 00:14:18.400 "qid": 0, 00:14:18.400 "state": "enabled", 00:14:18.400 "thread": "nvmf_tgt_poll_group_000", 00:14:18.400 "listen_address": { 00:14:18.400 "trtype": "TCP", 00:14:18.400 "adrfam": "IPv4", 00:14:18.400 "traddr": "10.0.0.2", 00:14:18.400 "trsvcid": "4420" 00:14:18.400 }, 00:14:18.400 "peer_address": { 00:14:18.400 "trtype": "TCP", 00:14:18.400 "adrfam": "IPv4", 00:14:18.400 "traddr": "10.0.0.1", 00:14:18.400 "trsvcid": "37896" 00:14:18.400 }, 00:14:18.400 "auth": { 00:14:18.400 "state": "completed", 00:14:18.400 "digest": "sha384", 00:14:18.400 "dhgroup": "null" 00:14:18.400 } 00:14:18.400 } 00:14:18.400 ]' 00:14:18.400 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.658 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.658 23:14:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.658 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.658 23:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.658 23:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.658 23:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.658 23:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.916 23:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:19.515 23:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.774 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.340 00:14:20.340 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.340 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.340 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.599 { 00:14:20.599 "cntlid": 57, 00:14:20.599 "qid": 0, 00:14:20.599 "state": "enabled", 00:14:20.599 "thread": "nvmf_tgt_poll_group_000", 00:14:20.599 "listen_address": { 00:14:20.599 "trtype": "TCP", 00:14:20.599 "adrfam": "IPv4", 00:14:20.599 "traddr": "10.0.0.2", 00:14:20.599 "trsvcid": "4420" 00:14:20.599 }, 00:14:20.599 "peer_address": { 00:14:20.599 "trtype": "TCP", 00:14:20.599 "adrfam": "IPv4", 00:14:20.599 "traddr": "10.0.0.1", 00:14:20.599 "trsvcid": "49546" 00:14:20.599 }, 00:14:20.599 "auth": { 00:14:20.599 "state": "completed", 00:14:20.599 "digest": "sha384", 00:14:20.599 "dhgroup": "ffdhe2048" 00:14:20.599 } 00:14:20.599 } 00:14:20.599 ]' 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:20.599 23:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.599 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.599 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.599 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.858 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret 
DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:21.791 23:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.791 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.357 00:14:22.357 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.357 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.357 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.616 { 00:14:22.616 "cntlid": 59, 00:14:22.616 "qid": 0, 00:14:22.616 "state": "enabled", 00:14:22.616 "thread": "nvmf_tgt_poll_group_000", 00:14:22.616 "listen_address": { 00:14:22.616 "trtype": "TCP", 00:14:22.616 "adrfam": "IPv4", 00:14:22.616 "traddr": "10.0.0.2", 00:14:22.616 "trsvcid": "4420" 00:14:22.616 }, 00:14:22.616 "peer_address": { 00:14:22.616 "trtype": "TCP", 00:14:22.616 "adrfam": "IPv4", 00:14:22.616 "traddr": "10.0.0.1", 00:14:22.616 "trsvcid": "49582" 00:14:22.616 }, 00:14:22.616 "auth": { 00:14:22.616 "state": "completed", 00:14:22.616 "digest": "sha384", 00:14:22.616 "dhgroup": "ffdhe2048" 00:14:22.616 } 00:14:22.616 } 00:14:22.616 ]' 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.616 23:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.616 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.616 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.616 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.874 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.810 23:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.810 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:24.377 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.377 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.377 { 00:14:24.377 "cntlid": 61, 00:14:24.377 "qid": 0, 00:14:24.377 "state": "enabled", 00:14:24.377 "thread": "nvmf_tgt_poll_group_000", 00:14:24.377 "listen_address": { 00:14:24.378 "trtype": "TCP", 00:14:24.378 "adrfam": "IPv4", 00:14:24.378 "traddr": "10.0.0.2", 00:14:24.378 "trsvcid": "4420" 00:14:24.378 }, 00:14:24.378 "peer_address": { 00:14:24.378 "trtype": "TCP", 00:14:24.378 "adrfam": "IPv4", 00:14:24.378 "traddr": "10.0.0.1", 00:14:24.378 "trsvcid": "49606" 00:14:24.378 }, 00:14:24.378 "auth": { 00:14:24.378 "state": "completed", 00:14:24.378 "digest": "sha384", 00:14:24.378 "dhgroup": 
"ffdhe2048" 00:14:24.378 } 00:14:24.378 } 00:14:24.378 ]' 00:14:24.378 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.647 23:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.916 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.486 23:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.745 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:26.004 00:14:26.004 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.004 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.004 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.571 { 00:14:26.571 "cntlid": 63, 00:14:26.571 "qid": 0, 00:14:26.571 "state": "enabled", 00:14:26.571 "thread": "nvmf_tgt_poll_group_000", 00:14:26.571 "listen_address": { 00:14:26.571 "trtype": "TCP", 00:14:26.571 "adrfam": "IPv4", 00:14:26.571 "traddr": "10.0.0.2", 00:14:26.571 "trsvcid": "4420" 00:14:26.571 }, 00:14:26.571 "peer_address": { 00:14:26.571 "trtype": "TCP", 00:14:26.571 "adrfam": "IPv4", 00:14:26.571 "traddr": "10.0.0.1", 00:14:26.571 "trsvcid": "49634" 00:14:26.571 }, 00:14:26.571 "auth": { 00:14:26.571 "state": "completed", 00:14:26.571 "digest": "sha384", 00:14:26.571 "dhgroup": "ffdhe2048" 00:14:26.571 } 00:14:26.571 } 00:14:26.571 ]' 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.571 23:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.830 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid 
e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.397 23:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.965 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.228 00:14:28.228 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.228 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.228 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.496 { 00:14:28.496 "cntlid": 65, 00:14:28.496 "qid": 0, 00:14:28.496 "state": "enabled", 00:14:28.496 "thread": "nvmf_tgt_poll_group_000", 00:14:28.496 "listen_address": { 00:14:28.496 "trtype": "TCP", 00:14:28.496 "adrfam": "IPv4", 00:14:28.496 "traddr": "10.0.0.2", 00:14:28.496 "trsvcid": "4420" 00:14:28.496 }, 00:14:28.496 "peer_address": { 00:14:28.496 "trtype": "TCP", 00:14:28.496 "adrfam": "IPv4", 00:14:28.496 "traddr": "10.0.0.1", 00:14:28.496 "trsvcid": "49666" 00:14:28.496 }, 00:14:28.496 "auth": { 00:14:28.496 "state": "completed", 00:14:28.496 "digest": "sha384", 00:14:28.496 "dhgroup": "ffdhe3072" 00:14:28.496 } 00:14:28.496 } 00:14:28.496 ]' 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.496 23:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.063 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.630 23:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.889 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.148 00:14:30.148 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.148 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.148 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.715 { 00:14:30.715 "cntlid": 67, 00:14:30.715 "qid": 0, 00:14:30.715 "state": "enabled", 00:14:30.715 "thread": "nvmf_tgt_poll_group_000", 00:14:30.715 "listen_address": { 00:14:30.715 "trtype": "TCP", 00:14:30.715 "adrfam": "IPv4", 00:14:30.715 "traddr": "10.0.0.2", 00:14:30.715 "trsvcid": "4420" 00:14:30.715 }, 00:14:30.715 "peer_address": { 00:14:30.715 "trtype": 
"TCP", 00:14:30.715 "adrfam": "IPv4", 00:14:30.715 "traddr": "10.0.0.1", 00:14:30.715 "trsvcid": "43122" 00:14:30.715 }, 00:14:30.715 "auth": { 00:14:30.715 "state": "completed", 00:14:30.715 "digest": "sha384", 00:14:30.715 "dhgroup": "ffdhe3072" 00:14:30.715 } 00:14:30.715 } 00:14:30.715 ]' 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:30.715 23:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.715 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.715 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.715 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.715 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.715 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.973 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.540 23:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.799 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.375 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.375 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.679 { 00:14:32.679 "cntlid": 69, 00:14:32.679 "qid": 0, 00:14:32.679 "state": "enabled", 00:14:32.679 "thread": "nvmf_tgt_poll_group_000", 00:14:32.679 "listen_address": { 00:14:32.679 "trtype": "TCP", 00:14:32.679 "adrfam": "IPv4", 00:14:32.679 "traddr": "10.0.0.2", 00:14:32.679 "trsvcid": "4420" 00:14:32.679 }, 00:14:32.679 "peer_address": { 00:14:32.679 "trtype": "TCP", 00:14:32.679 "adrfam": "IPv4", 00:14:32.679 "traddr": "10.0.0.1", 00:14:32.679 "trsvcid": "43134" 00:14:32.679 }, 00:14:32.679 "auth": { 00:14:32.679 "state": "completed", 00:14:32.679 "digest": "sha384", 00:14:32.679 "dhgroup": "ffdhe3072" 00:14:32.679 } 00:14:32.679 } 00:14:32.679 ]' 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.679 23:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.938 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:33.505 23:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:33.764 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:34.331 00:14:34.331 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:14:34.331 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.331 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.590 { 00:14:34.590 "cntlid": 71, 00:14:34.590 "qid": 0, 00:14:34.590 "state": "enabled", 00:14:34.590 "thread": "nvmf_tgt_poll_group_000", 00:14:34.590 "listen_address": { 00:14:34.590 "trtype": "TCP", 00:14:34.590 "adrfam": "IPv4", 00:14:34.590 "traddr": "10.0.0.2", 00:14:34.590 "trsvcid": "4420" 00:14:34.590 }, 00:14:34.590 "peer_address": { 00:14:34.590 "trtype": "TCP", 00:14:34.590 "adrfam": "IPv4", 00:14:34.590 "traddr": "10.0.0.1", 00:14:34.590 "trsvcid": "43166" 00:14:34.590 }, 00:14:34.590 "auth": { 00:14:34.590 "state": "completed", 00:14:34.590 "digest": "sha384", 00:14:34.590 "dhgroup": "ffdhe3072" 00:14:34.590 } 00:14:34.590 } 00:14:34.590 ]' 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:34.590 23:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.590 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.590 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.590 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.849 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.785 23:14:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:35.785 23:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.044 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.304 00:14:36.304 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.304 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.304 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.563 { 00:14:36.563 "cntlid": 73, 00:14:36.563 "qid": 0, 00:14:36.563 "state": "enabled", 00:14:36.563 "thread": "nvmf_tgt_poll_group_000", 00:14:36.563 "listen_address": { 00:14:36.563 "trtype": 
"TCP", 00:14:36.563 "adrfam": "IPv4", 00:14:36.563 "traddr": "10.0.0.2", 00:14:36.563 "trsvcid": "4420" 00:14:36.563 }, 00:14:36.563 "peer_address": { 00:14:36.563 "trtype": "TCP", 00:14:36.563 "adrfam": "IPv4", 00:14:36.563 "traddr": "10.0.0.1", 00:14:36.563 "trsvcid": "43208" 00:14:36.563 }, 00:14:36.563 "auth": { 00:14:36.563 "state": "completed", 00:14:36.563 "digest": "sha384", 00:14:36.563 "dhgroup": "ffdhe4096" 00:14:36.563 } 00:14:36.563 } 00:14:36.563 ]' 00:14:36.563 23:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.563 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:36.563 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.821 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:36.821 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.821 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.821 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.821 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.080 23:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.647 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:37.906 23:15:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.906 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.474 00:14:38.474 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.474 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.474 23:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.733 { 00:14:38.733 "cntlid": 75, 00:14:38.733 "qid": 0, 00:14:38.733 "state": "enabled", 00:14:38.733 "thread": "nvmf_tgt_poll_group_000", 00:14:38.733 "listen_address": { 00:14:38.733 "trtype": "TCP", 00:14:38.733 "adrfam": "IPv4", 00:14:38.733 "traddr": "10.0.0.2", 00:14:38.733 "trsvcid": "4420" 00:14:38.733 }, 00:14:38.733 "peer_address": { 00:14:38.733 "trtype": "TCP", 00:14:38.733 "adrfam": "IPv4", 00:14:38.733 "traddr": "10.0.0.1", 00:14:38.733 "trsvcid": "43240" 00:14:38.733 }, 00:14:38.733 "auth": { 00:14:38.733 "state": "completed", 00:14:38.733 "digest": "sha384", 00:14:38.733 "dhgroup": "ffdhe4096" 00:14:38.733 } 00:14:38.733 } 00:14:38.733 ]' 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.733 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.991 23:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:39.941 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.200 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.459 00:14:40.459 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.459 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.459 23:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.718 { 00:14:40.718 "cntlid": 77, 00:14:40.718 "qid": 0, 00:14:40.718 "state": "enabled", 00:14:40.718 "thread": "nvmf_tgt_poll_group_000", 00:14:40.718 "listen_address": { 00:14:40.718 "trtype": "TCP", 00:14:40.718 "adrfam": "IPv4", 00:14:40.718 "traddr": "10.0.0.2", 00:14:40.718 "trsvcid": "4420" 00:14:40.718 }, 00:14:40.718 "peer_address": { 00:14:40.718 "trtype": "TCP", 00:14:40.718 "adrfam": "IPv4", 00:14:40.718 "traddr": "10.0.0.1", 00:14:40.718 "trsvcid": "34684" 00:14:40.718 }, 00:14:40.718 "auth": { 00:14:40.718 "state": "completed", 00:14:40.718 "digest": "sha384", 00:14:40.718 "dhgroup": "ffdhe4096" 00:14:40.718 } 00:14:40.718 } 00:14:40.718 ]' 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.718 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.977 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.977 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.977 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.977 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.977 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.236 23:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:41.802 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.061 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.628 00:14:42.628 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.628 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.628 23:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:14:42.887 { 00:14:42.887 "cntlid": 79, 00:14:42.887 "qid": 0, 00:14:42.887 "state": "enabled", 00:14:42.887 "thread": "nvmf_tgt_poll_group_000", 00:14:42.887 "listen_address": { 00:14:42.887 "trtype": "TCP", 00:14:42.887 "adrfam": "IPv4", 00:14:42.887 "traddr": "10.0.0.2", 00:14:42.887 "trsvcid": "4420" 00:14:42.887 }, 00:14:42.887 "peer_address": { 00:14:42.887 "trtype": "TCP", 00:14:42.887 "adrfam": "IPv4", 00:14:42.887 "traddr": "10.0.0.1", 00:14:42.887 "trsvcid": "34718" 00:14:42.887 }, 00:14:42.887 "auth": { 00:14:42.887 "state": "completed", 00:14:42.887 "digest": "sha384", 00:14:42.887 "dhgroup": "ffdhe4096" 00:14:42.887 } 00:14:42.887 } 00:14:42.887 ]' 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.887 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.146 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.146 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.146 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.146 23:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:44.156 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:44.157 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
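Each pass of the loop traced above exercises one digest/dhgroup/key combination through the same RPC sequence. The lines below are a condensed sketch of that sequence reconstructed from the trace, not the verbatim target/auth.sh code; rpc.py is SPDK's scripts/rpc.py (full path shown in the trace), and $digest, $dhgroup, $keyid and $hostnqn stand in for the loop variables and the nqn.2014-08.org.nvmexpress:uuid:... host NQN used throughout this run.

    # Sketch of one (digest, dhgroup, keyid) iteration as seen in the trace above.
    # Target-side RPCs go to the default socket; host-side bdev_nvme RPCs use /var/tmp/host.sock.
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"        # restrict the host to one digest/dhgroup
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"       # ctrlr key only when one exists (key3 has none in this run)
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"       # authenticated connect from the SPDK host
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state' # expect $digest, $dhgroup, completed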
00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.415 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.416 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.416 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.416 23:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.416 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.416 23:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.982 00:14:44.982 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.982 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.982 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.241 { 00:14:45.241 "cntlid": 81, 00:14:45.241 "qid": 0, 00:14:45.241 "state": "enabled", 00:14:45.241 "thread": "nvmf_tgt_poll_group_000", 00:14:45.241 "listen_address": { 00:14:45.241 "trtype": "TCP", 00:14:45.241 "adrfam": "IPv4", 00:14:45.241 "traddr": "10.0.0.2", 00:14:45.241 "trsvcid": "4420" 00:14:45.241 }, 00:14:45.241 "peer_address": { 00:14:45.241 "trtype": "TCP", 00:14:45.241 "adrfam": "IPv4", 00:14:45.241 "traddr": "10.0.0.1", 00:14:45.241 "trsvcid": "34728" 00:14:45.241 }, 00:14:45.241 "auth": { 00:14:45.241 "state": "completed", 00:14:45.241 "digest": "sha384", 00:14:45.241 "dhgroup": "ffdhe6144" 00:14:45.241 } 00:14:45.241 } 00:14:45.241 ]' 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.241 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.499 23:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:46.432 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.690 23:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.948 00:14:46.948 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.948 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.948 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.206 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.206 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.206 23:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.206 23:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.464 23:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.464 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.464 { 00:14:47.464 "cntlid": 83, 00:14:47.464 "qid": 0, 00:14:47.464 "state": "enabled", 00:14:47.464 "thread": "nvmf_tgt_poll_group_000", 00:14:47.464 "listen_address": { 00:14:47.464 "trtype": "TCP", 00:14:47.464 "adrfam": "IPv4", 00:14:47.464 "traddr": "10.0.0.2", 00:14:47.464 "trsvcid": "4420" 00:14:47.464 }, 00:14:47.465 "peer_address": { 00:14:47.465 "trtype": "TCP", 00:14:47.465 "adrfam": "IPv4", 00:14:47.465 "traddr": "10.0.0.1", 00:14:47.465 "trsvcid": "34736" 00:14:47.465 }, 00:14:47.465 "auth": { 00:14:47.465 "state": "completed", 00:14:47.465 "digest": "sha384", 00:14:47.465 "dhgroup": "ffdhe6144" 00:14:47.465 } 00:14:47.465 } 00:14:47.465 ]' 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.465 23:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.723 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:48.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:48.290 23:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.549 23:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.550 23:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.550 23:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.550 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.550 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.116 00:14:49.116 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.116 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.116 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.376 { 00:14:49.376 "cntlid": 85, 00:14:49.376 "qid": 0, 00:14:49.376 "state": "enabled", 00:14:49.376 "thread": "nvmf_tgt_poll_group_000", 00:14:49.376 "listen_address": { 00:14:49.376 "trtype": "TCP", 00:14:49.376 "adrfam": "IPv4", 00:14:49.376 "traddr": "10.0.0.2", 00:14:49.376 "trsvcid": "4420" 00:14:49.376 }, 00:14:49.376 "peer_address": { 00:14:49.376 "trtype": "TCP", 00:14:49.376 "adrfam": "IPv4", 00:14:49.376 "traddr": "10.0.0.1", 00:14:49.376 "trsvcid": "36644" 00:14:49.376 }, 00:14:49.376 "auth": { 00:14:49.376 "state": "completed", 00:14:49.376 "digest": "sha384", 00:14:49.376 "dhgroup": "ffdhe6144" 00:14:49.376 } 00:14:49.376 } 00:14:49.376 ]' 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.376 23:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.942 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.508 23:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:50.766 23:15:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:50.766 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:51.331 00:14:51.331 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.331 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.331 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.590 { 00:14:51.590 "cntlid": 87, 00:14:51.590 "qid": 0, 00:14:51.590 "state": "enabled", 00:14:51.590 "thread": "nvmf_tgt_poll_group_000", 00:14:51.590 "listen_address": { 00:14:51.590 "trtype": "TCP", 00:14:51.590 "adrfam": "IPv4", 00:14:51.590 "traddr": "10.0.0.2", 00:14:51.590 "trsvcid": "4420" 00:14:51.590 }, 00:14:51.590 "peer_address": { 00:14:51.590 "trtype": "TCP", 00:14:51.590 "adrfam": "IPv4", 00:14:51.590 "traddr": "10.0.0.1", 00:14:51.590 "trsvcid": "36678" 00:14:51.590 }, 00:14:51.590 "auth": { 00:14:51.590 "state": "completed", 00:14:51.590 "digest": "sha384", 00:14:51.590 "dhgroup": "ffdhe6144" 00:14:51.590 } 00:14:51.590 } 00:14:51.590 ]' 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:14:51.590 23:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.590 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.590 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.590 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.590 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.590 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.157 23:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.723 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.982 23:15:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.982 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.550 00:14:53.550 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.550 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.550 23:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.117 { 00:14:54.117 "cntlid": 89, 00:14:54.117 "qid": 0, 00:14:54.117 "state": "enabled", 00:14:54.117 "thread": "nvmf_tgt_poll_group_000", 00:14:54.117 "listen_address": { 00:14:54.117 "trtype": "TCP", 00:14:54.117 "adrfam": "IPv4", 00:14:54.117 "traddr": "10.0.0.2", 00:14:54.117 "trsvcid": "4420" 00:14:54.117 }, 00:14:54.117 "peer_address": { 00:14:54.117 "trtype": "TCP", 00:14:54.117 "adrfam": "IPv4", 00:14:54.117 "traddr": "10.0.0.1", 00:14:54.117 "trsvcid": "36710" 00:14:54.117 }, 00:14:54.117 "auth": { 00:14:54.117 "state": "completed", 00:14:54.117 "digest": "sha384", 00:14:54.117 "dhgroup": "ffdhe8192" 00:14:54.117 } 00:14:54.117 } 00:14:54.117 ]' 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.117 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.375 23:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret 
DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:54.943 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.536 23:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.803 00:14:56.060 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.060 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.060 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
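After the qpair auth checks, each iteration tears down the SPDK-side controller and repeats the handshake from the kernel initiator, as the nvme connect/disconnect lines above show. A rough sketch of that second leg follows; <secret> and <ctrl-secret> stand in for the DHHC-1:... strings printed in the log (the ctrl secret is omitted for keys without a controller secret, e.g. key3), and $hostid for the e26f5e1a-... UUID.

    # Kernel-initiator leg of each iteration (sketch based on the trace; secrets elided).
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret '<secret>' --dhchap-ctrl-secret '<ctrl-secret>' # kernel NVMe/TCP DH-HMAC-CHAP
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0                        # expect "disconnected 1 controller(s)"
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid"                        # drop the credentials before the next keyid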
00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.319 { 00:14:56.319 "cntlid": 91, 00:14:56.319 "qid": 0, 00:14:56.319 "state": "enabled", 00:14:56.319 "thread": "nvmf_tgt_poll_group_000", 00:14:56.319 "listen_address": { 00:14:56.319 "trtype": "TCP", 00:14:56.319 "adrfam": "IPv4", 00:14:56.319 "traddr": "10.0.0.2", 00:14:56.319 "trsvcid": "4420" 00:14:56.319 }, 00:14:56.319 "peer_address": { 00:14:56.319 "trtype": "TCP", 00:14:56.319 "adrfam": "IPv4", 00:14:56.319 "traddr": "10.0.0.1", 00:14:56.319 "trsvcid": "36740" 00:14:56.319 }, 00:14:56.319 "auth": { 00:14:56.319 "state": "completed", 00:14:56.319 "digest": "sha384", 00:14:56.319 "dhgroup": "ffdhe8192" 00:14:56.319 } 00:14:56.319 } 00:14:56.319 ]' 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.319 23:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.578 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:14:57.515 23:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.774 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.343 00:14:58.343 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.343 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.343 23:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.601 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.601 { 00:14:58.601 "cntlid": 93, 00:14:58.601 "qid": 0, 00:14:58.601 "state": "enabled", 00:14:58.601 "thread": "nvmf_tgt_poll_group_000", 00:14:58.601 "listen_address": { 00:14:58.601 "trtype": "TCP", 00:14:58.601 "adrfam": "IPv4", 00:14:58.601 "traddr": "10.0.0.2", 00:14:58.601 "trsvcid": "4420" 00:14:58.602 }, 00:14:58.602 "peer_address": { 00:14:58.602 "trtype": "TCP", 00:14:58.602 "adrfam": "IPv4", 00:14:58.602 "traddr": "10.0.0.1", 00:14:58.602 "trsvcid": "36764" 00:14:58.602 }, 00:14:58.602 
"auth": { 00:14:58.602 "state": "completed", 00:14:58.602 "digest": "sha384", 00:14:58.602 "dhgroup": "ffdhe8192" 00:14:58.602 } 00:14:58.602 } 00:14:58.602 ]' 00:14:58.602 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.602 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.861 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.119 23:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:59.741 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.000 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:00.568 00:15:00.568 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.568 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.568 23:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.826 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.826 { 00:15:00.826 "cntlid": 95, 00:15:00.826 "qid": 0, 00:15:00.826 "state": "enabled", 00:15:00.826 "thread": "nvmf_tgt_poll_group_000", 00:15:00.826 "listen_address": { 00:15:00.826 "trtype": "TCP", 00:15:00.827 "adrfam": "IPv4", 00:15:00.827 "traddr": "10.0.0.2", 00:15:00.827 "trsvcid": "4420" 00:15:00.827 }, 00:15:00.827 "peer_address": { 00:15:00.827 "trtype": "TCP", 00:15:00.827 "adrfam": "IPv4", 00:15:00.827 "traddr": "10.0.0.1", 00:15:00.827 "trsvcid": "40998" 00:15:00.827 }, 00:15:00.827 "auth": { 00:15:00.827 "state": "completed", 00:15:00.827 "digest": "sha384", 00:15:00.827 "dhgroup": "ffdhe8192" 00:15:00.827 } 00:15:00.827 } 00:15:00.827 ]' 00:15:00.827 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.091 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.349 23:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:01.916 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.916 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:01.916 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.916 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.174 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.433 23:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.691 00:15:02.691 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.691 23:15:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.691 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.949 { 00:15:02.949 "cntlid": 97, 00:15:02.949 "qid": 0, 00:15:02.949 "state": "enabled", 00:15:02.949 "thread": "nvmf_tgt_poll_group_000", 00:15:02.949 "listen_address": { 00:15:02.949 "trtype": "TCP", 00:15:02.949 "adrfam": "IPv4", 00:15:02.949 "traddr": "10.0.0.2", 00:15:02.949 "trsvcid": "4420" 00:15:02.949 }, 00:15:02.949 "peer_address": { 00:15:02.949 "trtype": "TCP", 00:15:02.949 "adrfam": "IPv4", 00:15:02.949 "traddr": "10.0.0.1", 00:15:02.949 "trsvcid": "41030" 00:15:02.949 }, 00:15:02.949 "auth": { 00:15:02.949 "state": "completed", 00:15:02.949 "digest": "sha512", 00:15:02.949 "dhgroup": "null" 00:15:02.949 } 00:15:02.949 } 00:15:02.949 ]' 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:02.949 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.207 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.207 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.207 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.465 23:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.043 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.301 23:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.560 00:15:04.561 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.561 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.561 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.127 { 00:15:05.127 "cntlid": 99, 00:15:05.127 "qid": 0, 00:15:05.127 "state": "enabled", 00:15:05.127 "thread": "nvmf_tgt_poll_group_000", 00:15:05.127 "listen_address": { 00:15:05.127 "trtype": "TCP", 00:15:05.127 "adrfam": "IPv4", 00:15:05.127 
"traddr": "10.0.0.2", 00:15:05.127 "trsvcid": "4420" 00:15:05.127 }, 00:15:05.127 "peer_address": { 00:15:05.127 "trtype": "TCP", 00:15:05.127 "adrfam": "IPv4", 00:15:05.127 "traddr": "10.0.0.1", 00:15:05.127 "trsvcid": "41058" 00:15:05.127 }, 00:15:05.127 "auth": { 00:15:05.127 "state": "completed", 00:15:05.127 "digest": "sha512", 00:15:05.127 "dhgroup": "null" 00:15:05.127 } 00:15:05.127 } 00:15:05.127 ]' 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.127 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.386 23:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:05.954 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.521 23:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.780 00:15:06.780 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.780 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.780 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.039 { 00:15:07.039 "cntlid": 101, 00:15:07.039 "qid": 0, 00:15:07.039 "state": "enabled", 00:15:07.039 "thread": "nvmf_tgt_poll_group_000", 00:15:07.039 "listen_address": { 00:15:07.039 "trtype": "TCP", 00:15:07.039 "adrfam": "IPv4", 00:15:07.039 "traddr": "10.0.0.2", 00:15:07.039 "trsvcid": "4420" 00:15:07.039 }, 00:15:07.039 "peer_address": { 00:15:07.039 "trtype": "TCP", 00:15:07.039 "adrfam": "IPv4", 00:15:07.039 "traddr": "10.0.0.1", 00:15:07.039 "trsvcid": "41096" 00:15:07.039 }, 00:15:07.039 "auth": { 00:15:07.039 "state": "completed", 00:15:07.039 "digest": "sha512", 00:15:07.039 "dhgroup": "null" 00:15:07.039 } 00:15:07.039 } 00:15:07.039 ]' 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:07.039 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.298 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.298 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.298 23:15:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.556 23:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:08.173 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.431 23:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:08.690 00:15:08.690 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:15:08.690 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.690 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.948 { 00:15:08.948 "cntlid": 103, 00:15:08.948 "qid": 0, 00:15:08.948 "state": "enabled", 00:15:08.948 "thread": "nvmf_tgt_poll_group_000", 00:15:08.948 "listen_address": { 00:15:08.948 "trtype": "TCP", 00:15:08.948 "adrfam": "IPv4", 00:15:08.948 "traddr": "10.0.0.2", 00:15:08.948 "trsvcid": "4420" 00:15:08.948 }, 00:15:08.948 "peer_address": { 00:15:08.948 "trtype": "TCP", 00:15:08.948 "adrfam": "IPv4", 00:15:08.948 "traddr": "10.0.0.1", 00:15:08.948 "trsvcid": "55718" 00:15:08.948 }, 00:15:08.948 "auth": { 00:15:08.948 "state": "completed", 00:15:08.948 "digest": "sha512", 00:15:08.948 "dhgroup": "null" 00:15:08.948 } 00:15:08.948 } 00:15:08.948 ]' 00:15:08.948 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.206 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.464 23:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:10.028 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.285 
23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.285 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.542 23:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.800 00:15:10.800 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.800 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.800 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.056 { 00:15:11.056 "cntlid": 105, 00:15:11.056 "qid": 0, 00:15:11.056 "state": "enabled", 00:15:11.056 "thread": "nvmf_tgt_poll_group_000", 00:15:11.056 "listen_address": { 00:15:11.056 
"trtype": "TCP", 00:15:11.056 "adrfam": "IPv4", 00:15:11.056 "traddr": "10.0.0.2", 00:15:11.056 "trsvcid": "4420" 00:15:11.056 }, 00:15:11.056 "peer_address": { 00:15:11.056 "trtype": "TCP", 00:15:11.056 "adrfam": "IPv4", 00:15:11.056 "traddr": "10.0.0.1", 00:15:11.056 "trsvcid": "55734" 00:15:11.056 }, 00:15:11.056 "auth": { 00:15:11.056 "state": "completed", 00:15:11.056 "digest": "sha512", 00:15:11.056 "dhgroup": "ffdhe2048" 00:15:11.056 } 00:15:11.056 } 00:15:11.056 ]' 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.056 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.313 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.313 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.313 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.570 23:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.136 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.393 23:15:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.393 23:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.651 00:15:12.651 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.651 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.651 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.909 { 00:15:12.909 "cntlid": 107, 00:15:12.909 "qid": 0, 00:15:12.909 "state": "enabled", 00:15:12.909 "thread": "nvmf_tgt_poll_group_000", 00:15:12.909 "listen_address": { 00:15:12.909 "trtype": "TCP", 00:15:12.909 "adrfam": "IPv4", 00:15:12.909 "traddr": "10.0.0.2", 00:15:12.909 "trsvcid": "4420" 00:15:12.909 }, 00:15:12.909 "peer_address": { 00:15:12.909 "trtype": "TCP", 00:15:12.909 "adrfam": "IPv4", 00:15:12.909 "traddr": "10.0.0.1", 00:15:12.909 "trsvcid": "55768" 00:15:12.909 }, 00:15:12.909 "auth": { 00:15:12.909 "state": "completed", 00:15:12.909 "digest": "sha512", 00:15:12.909 "dhgroup": "ffdhe2048" 00:15:12.909 } 00:15:12.909 } 00:15:12.909 ]' 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.909 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.167 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.167 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.167 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:15:13.167 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.167 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.425 23:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.989 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.247 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.506 00:15:14.506 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.506 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.506 23:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.764 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.764 { 00:15:14.764 "cntlid": 109, 00:15:14.764 "qid": 0, 00:15:14.764 "state": "enabled", 00:15:14.764 "thread": "nvmf_tgt_poll_group_000", 00:15:14.764 "listen_address": { 00:15:14.764 "trtype": "TCP", 00:15:14.764 "adrfam": "IPv4", 00:15:14.764 "traddr": "10.0.0.2", 00:15:14.764 "trsvcid": "4420" 00:15:14.764 }, 00:15:14.765 "peer_address": { 00:15:14.765 "trtype": "TCP", 00:15:14.765 "adrfam": "IPv4", 00:15:14.765 "traddr": "10.0.0.1", 00:15:14.765 "trsvcid": "55798" 00:15:14.765 }, 00:15:14.765 "auth": { 00:15:14.765 "state": "completed", 00:15:14.765 "digest": "sha512", 00:15:14.765 "dhgroup": "ffdhe2048" 00:15:14.765 } 00:15:14.765 } 00:15:14.765 ]' 00:15:14.765 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.765 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.023 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.282 23:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:15.848 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.117 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.377 00:15:16.377 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.377 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.377 23:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:16.944 { 00:15:16.944 "cntlid": 111, 00:15:16.944 "qid": 0, 00:15:16.944 "state": "enabled", 00:15:16.944 "thread": "nvmf_tgt_poll_group_000", 00:15:16.944 "listen_address": { 00:15:16.944 "trtype": "TCP", 00:15:16.944 "adrfam": "IPv4", 00:15:16.944 "traddr": "10.0.0.2", 00:15:16.944 "trsvcid": "4420" 00:15:16.944 }, 00:15:16.944 "peer_address": { 00:15:16.944 "trtype": "TCP", 00:15:16.944 "adrfam": "IPv4", 00:15:16.944 "traddr": "10.0.0.1", 00:15:16.944 "trsvcid": "55820" 00:15:16.944 }, 00:15:16.944 "auth": { 00:15:16.944 "state": "completed", 00:15:16.944 "digest": "sha512", 00:15:16.944 "dhgroup": "ffdhe2048" 00:15:16.944 } 00:15:16.944 } 00:15:16.944 ]' 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.944 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.203 23:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:17.770 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.029 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.288 00:15:18.547 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.547 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.547 23:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.547 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.547 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.547 23:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.547 23:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.806 { 00:15:18.806 "cntlid": 113, 00:15:18.806 "qid": 0, 00:15:18.806 "state": "enabled", 00:15:18.806 "thread": "nvmf_tgt_poll_group_000", 00:15:18.806 "listen_address": { 00:15:18.806 "trtype": "TCP", 00:15:18.806 "adrfam": "IPv4", 00:15:18.806 "traddr": "10.0.0.2", 00:15:18.806 "trsvcid": "4420" 00:15:18.806 }, 00:15:18.806 "peer_address": { 00:15:18.806 "trtype": "TCP", 00:15:18.806 "adrfam": "IPv4", 00:15:18.806 "traddr": "10.0.0.1", 00:15:18.806 "trsvcid": "55850" 00:15:18.806 }, 00:15:18.806 "auth": { 00:15:18.806 "state": "completed", 00:15:18.806 "digest": "sha512", 00:15:18.806 "dhgroup": "ffdhe3072" 00:15:18.806 } 00:15:18.806 } 00:15:18.806 ]' 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.806 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.065 23:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.999 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.000 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.567 00:15:20.567 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.567 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.567 23:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.825 { 00:15:20.825 "cntlid": 115, 00:15:20.825 "qid": 0, 00:15:20.825 "state": "enabled", 00:15:20.825 "thread": "nvmf_tgt_poll_group_000", 00:15:20.825 "listen_address": { 00:15:20.825 "trtype": "TCP", 00:15:20.825 "adrfam": "IPv4", 00:15:20.825 "traddr": "10.0.0.2", 00:15:20.825 "trsvcid": "4420" 00:15:20.825 }, 00:15:20.825 "peer_address": { 00:15:20.825 "trtype": "TCP", 00:15:20.825 "adrfam": "IPv4", 00:15:20.825 "traddr": "10.0.0.1", 00:15:20.825 "trsvcid": "53964" 00:15:20.825 }, 00:15:20.825 "auth": { 00:15:20.825 "state": "completed", 00:15:20.825 "digest": "sha512", 00:15:20.825 "dhgroup": "ffdhe3072" 00:15:20.825 } 00:15:20.825 } 00:15:20.825 ]' 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.825 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.099 23:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:22.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:22.036 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.295 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.554 00:15:22.554 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.554 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.554 23:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.812 { 00:15:22.812 "cntlid": 117, 00:15:22.812 "qid": 0, 00:15:22.812 "state": "enabled", 00:15:22.812 "thread": "nvmf_tgt_poll_group_000", 00:15:22.812 "listen_address": { 00:15:22.812 "trtype": "TCP", 00:15:22.812 "adrfam": "IPv4", 00:15:22.812 "traddr": "10.0.0.2", 00:15:22.812 "trsvcid": "4420" 00:15:22.812 }, 00:15:22.812 "peer_address": { 00:15:22.812 "trtype": "TCP", 00:15:22.812 "adrfam": "IPv4", 00:15:22.812 "traddr": "10.0.0.1", 00:15:22.812 "trsvcid": "53992" 00:15:22.812 }, 00:15:22.812 "auth": { 00:15:22.812 "state": "completed", 00:15:22.812 "digest": "sha512", 00:15:22.812 "dhgroup": "ffdhe3072" 00:15:22.812 } 00:15:22.812 } 00:15:22.812 ]' 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.812 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.070 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.070 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.070 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.328 23:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.894 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.895 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:23.895 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:24.153 23:15:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.153 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.413 00:15:24.413 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.413 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.413 23:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.671 { 00:15:24.671 "cntlid": 119, 00:15:24.671 "qid": 0, 00:15:24.671 "state": "enabled", 00:15:24.671 "thread": "nvmf_tgt_poll_group_000", 00:15:24.671 "listen_address": { 00:15:24.671 "trtype": "TCP", 00:15:24.671 "adrfam": "IPv4", 00:15:24.671 "traddr": "10.0.0.2", 00:15:24.671 "trsvcid": "4420" 00:15:24.671 }, 00:15:24.671 "peer_address": { 00:15:24.671 "trtype": "TCP", 00:15:24.671 "adrfam": "IPv4", 00:15:24.671 "traddr": "10.0.0.1", 00:15:24.671 "trsvcid": "54014" 00:15:24.671 }, 00:15:24.671 "auth": { 00:15:24.671 "state": "completed", 00:15:24.671 "digest": "sha512", 00:15:24.671 "dhgroup": "ffdhe3072" 00:15:24.671 } 00:15:24.671 } 00:15:24.671 ]' 00:15:24.671 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.930 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.189 23:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:25.755 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.014 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.015 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.015 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.015 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.015 23:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.015 23:15:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.015 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.581 00:15:26.581 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.581 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.581 23:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.839 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.839 { 00:15:26.839 "cntlid": 121, 00:15:26.839 "qid": 0, 00:15:26.839 "state": "enabled", 00:15:26.839 "thread": "nvmf_tgt_poll_group_000", 00:15:26.840 "listen_address": { 00:15:26.840 "trtype": "TCP", 00:15:26.840 "adrfam": "IPv4", 00:15:26.840 "traddr": "10.0.0.2", 00:15:26.840 "trsvcid": "4420" 00:15:26.840 }, 00:15:26.840 "peer_address": { 00:15:26.840 "trtype": "TCP", 00:15:26.840 "adrfam": "IPv4", 00:15:26.840 "traddr": "10.0.0.1", 00:15:26.840 "trsvcid": "54040" 00:15:26.840 }, 00:15:26.840 "auth": { 00:15:26.840 "state": "completed", 00:15:26.840 "digest": "sha512", 00:15:26.840 "dhgroup": "ffdhe4096" 00:15:26.840 } 00:15:26.840 } 00:15:26.840 ]' 00:15:26.840 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.840 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.840 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.840 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.840 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.097 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.097 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.097 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.355 23:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret 
DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:27.922 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.180 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.439 23:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.439 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.439 23:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.697 00:15:28.697 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.697 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.697 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.955 { 00:15:28.955 "cntlid": 123, 00:15:28.955 "qid": 0, 00:15:28.955 "state": "enabled", 00:15:28.955 "thread": "nvmf_tgt_poll_group_000", 00:15:28.955 "listen_address": { 00:15:28.955 "trtype": "TCP", 00:15:28.955 "adrfam": "IPv4", 00:15:28.955 "traddr": "10.0.0.2", 00:15:28.955 "trsvcid": "4420" 00:15:28.955 }, 00:15:28.955 "peer_address": { 00:15:28.955 "trtype": "TCP", 00:15:28.955 "adrfam": "IPv4", 00:15:28.955 "traddr": "10.0.0.1", 00:15:28.955 "trsvcid": "54062" 00:15:28.955 }, 00:15:28.955 "auth": { 00:15:28.955 "state": "completed", 00:15:28.955 "digest": "sha512", 00:15:28.955 "dhgroup": "ffdhe4096" 00:15:28.955 } 00:15:28.955 } 00:15:28.955 ]' 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.955 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.213 23:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:15:30.148 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.406 23:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.664 00:15:30.664 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.664 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.664 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.922 { 00:15:30.922 "cntlid": 125, 00:15:30.922 "qid": 0, 00:15:30.922 "state": "enabled", 00:15:30.922 "thread": "nvmf_tgt_poll_group_000", 00:15:30.922 "listen_address": { 00:15:30.922 "trtype": "TCP", 00:15:30.922 "adrfam": "IPv4", 00:15:30.922 "traddr": "10.0.0.2", 00:15:30.922 "trsvcid": "4420" 00:15:30.922 }, 00:15:30.922 "peer_address": { 00:15:30.922 "trtype": "TCP", 00:15:30.922 "adrfam": "IPv4", 00:15:30.922 "traddr": "10.0.0.1", 00:15:30.922 "trsvcid": "60002" 00:15:30.922 }, 00:15:30.922 
"auth": { 00:15:30.922 "state": "completed", 00:15:30.922 "digest": "sha512", 00:15:30.922 "dhgroup": "ffdhe4096" 00:15:30.922 } 00:15:30.922 } 00:15:30.922 ]' 00:15:30.922 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.180 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.438 23:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.005 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.264 23:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.831 00:15:32.831 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.831 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.831 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.090 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.091 { 00:15:33.091 "cntlid": 127, 00:15:33.091 "qid": 0, 00:15:33.091 "state": "enabled", 00:15:33.091 "thread": "nvmf_tgt_poll_group_000", 00:15:33.091 "listen_address": { 00:15:33.091 "trtype": "TCP", 00:15:33.091 "adrfam": "IPv4", 00:15:33.091 "traddr": "10.0.0.2", 00:15:33.091 "trsvcid": "4420" 00:15:33.091 }, 00:15:33.091 "peer_address": { 00:15:33.091 "trtype": "TCP", 00:15:33.091 "adrfam": "IPv4", 00:15:33.091 "traddr": "10.0.0.1", 00:15:33.091 "trsvcid": "60022" 00:15:33.091 }, 00:15:33.091 "auth": { 00:15:33.091 "state": "completed", 00:15:33.091 "digest": "sha512", 00:15:33.091 "dhgroup": "ffdhe4096" 00:15:33.091 } 00:15:33.091 } 00:15:33.091 ]' 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.091 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.349 23:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:33.917 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:34.175 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:15:34.175 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.175 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:34.175 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.175 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.176 23:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.743 00:15:34.743 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.743 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:15:34.743 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.001 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.002 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.002 23:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.002 23:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.002 23:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.002 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.002 { 00:15:35.002 "cntlid": 129, 00:15:35.002 "qid": 0, 00:15:35.002 "state": "enabled", 00:15:35.002 "thread": "nvmf_tgt_poll_group_000", 00:15:35.002 "listen_address": { 00:15:35.002 "trtype": "TCP", 00:15:35.002 "adrfam": "IPv4", 00:15:35.002 "traddr": "10.0.0.2", 00:15:35.002 "trsvcid": "4420" 00:15:35.002 }, 00:15:35.002 "peer_address": { 00:15:35.002 "trtype": "TCP", 00:15:35.002 "adrfam": "IPv4", 00:15:35.002 "traddr": "10.0.0.1", 00:15:35.003 "trsvcid": "60032" 00:15:35.003 }, 00:15:35.003 "auth": { 00:15:35.003 "state": "completed", 00:15:35.003 "digest": "sha512", 00:15:35.003 "dhgroup": "ffdhe6144" 00:15:35.003 } 00:15:35.003 } 00:15:35.003 ]' 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.003 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.263 23:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.196 
23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.196 23:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.804 00:15:36.804 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.804 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.804 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.063 { 00:15:37.063 "cntlid": 131, 00:15:37.063 "qid": 0, 00:15:37.063 "state": "enabled", 00:15:37.063 "thread": "nvmf_tgt_poll_group_000", 00:15:37.063 "listen_address": { 00:15:37.063 "trtype": "TCP", 00:15:37.063 "adrfam": "IPv4", 00:15:37.063 "traddr": "10.0.0.2", 00:15:37.063 "trsvcid": 
"4420" 00:15:37.063 }, 00:15:37.063 "peer_address": { 00:15:37.063 "trtype": "TCP", 00:15:37.063 "adrfam": "IPv4", 00:15:37.063 "traddr": "10.0.0.1", 00:15:37.063 "trsvcid": "60066" 00:15:37.063 }, 00:15:37.063 "auth": { 00:15:37.063 "state": "completed", 00:15:37.063 "digest": "sha512", 00:15:37.063 "dhgroup": "ffdhe6144" 00:15:37.063 } 00:15:37.063 } 00:15:37.063 ]' 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.063 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.322 23:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.256 23:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.847 00:15:38.847 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.847 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.847 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.120 { 00:15:39.120 "cntlid": 133, 00:15:39.120 "qid": 0, 00:15:39.120 "state": "enabled", 00:15:39.120 "thread": "nvmf_tgt_poll_group_000", 00:15:39.120 "listen_address": { 00:15:39.120 "trtype": "TCP", 00:15:39.120 "adrfam": "IPv4", 00:15:39.120 "traddr": "10.0.0.2", 00:15:39.120 "trsvcid": "4420" 00:15:39.120 }, 00:15:39.120 "peer_address": { 00:15:39.120 "trtype": "TCP", 00:15:39.120 "adrfam": "IPv4", 00:15:39.120 "traddr": "10.0.0.1", 00:15:39.120 "trsvcid": "60100" 00:15:39.120 }, 00:15:39.120 "auth": { 00:15:39.120 "state": "completed", 00:15:39.120 "digest": "sha512", 00:15:39.120 "dhgroup": "ffdhe6144" 00:15:39.120 } 00:15:39.120 } 00:15:39.120 ]' 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:39.120 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.379 23:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.968 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.227 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.485 23:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.485 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.485 23:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.743 00:15:40.743 23:16:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.743 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.743 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.310 { 00:15:41.310 "cntlid": 135, 00:15:41.310 "qid": 0, 00:15:41.310 "state": "enabled", 00:15:41.310 "thread": "nvmf_tgt_poll_group_000", 00:15:41.310 "listen_address": { 00:15:41.310 "trtype": "TCP", 00:15:41.310 "adrfam": "IPv4", 00:15:41.310 "traddr": "10.0.0.2", 00:15:41.310 "trsvcid": "4420" 00:15:41.310 }, 00:15:41.310 "peer_address": { 00:15:41.310 "trtype": "TCP", 00:15:41.310 "adrfam": "IPv4", 00:15:41.310 "traddr": "10.0.0.1", 00:15:41.310 "trsvcid": "45748" 00:15:41.310 }, 00:15:41.310 "auth": { 00:15:41.310 "state": "completed", 00:15:41.310 "digest": "sha512", 00:15:41.310 "dhgroup": "ffdhe6144" 00:15:41.310 } 00:15:41.310 } 00:15:41.310 ]' 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.310 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.568 23:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:42.502 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.503 23:16:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.503 23:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.439 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.439 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.439 { 00:15:43.439 "cntlid": 137, 00:15:43.439 "qid": 0, 00:15:43.439 "state": "enabled", 
00:15:43.439 "thread": "nvmf_tgt_poll_group_000", 00:15:43.439 "listen_address": { 00:15:43.439 "trtype": "TCP", 00:15:43.439 "adrfam": "IPv4", 00:15:43.439 "traddr": "10.0.0.2", 00:15:43.439 "trsvcid": "4420" 00:15:43.439 }, 00:15:43.439 "peer_address": { 00:15:43.439 "trtype": "TCP", 00:15:43.440 "adrfam": "IPv4", 00:15:43.440 "traddr": "10.0.0.1", 00:15:43.440 "trsvcid": "45768" 00:15:43.440 }, 00:15:43.440 "auth": { 00:15:43.440 "state": "completed", 00:15:43.440 "digest": "sha512", 00:15:43.440 "dhgroup": "ffdhe8192" 00:15:43.440 } 00:15:43.440 } 00:15:43.440 ]' 00:15:43.440 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.698 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.698 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.698 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.698 23:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.698 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.698 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.698 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.975 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.541 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.542 23:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.800 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.801 
23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.801 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.368 00:15:45.368 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.368 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.368 23:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.626 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.626 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.626 23:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.626 23:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.885 { 00:15:45.885 "cntlid": 139, 00:15:45.885 "qid": 0, 00:15:45.885 "state": "enabled", 00:15:45.885 "thread": "nvmf_tgt_poll_group_000", 00:15:45.885 "listen_address": { 00:15:45.885 "trtype": "TCP", 00:15:45.885 "adrfam": "IPv4", 00:15:45.885 "traddr": "10.0.0.2", 00:15:45.885 "trsvcid": "4420" 00:15:45.885 }, 00:15:45.885 "peer_address": { 00:15:45.885 "trtype": "TCP", 00:15:45.885 "adrfam": "IPv4", 00:15:45.885 "traddr": "10.0.0.1", 00:15:45.885 "trsvcid": "45792" 00:15:45.885 }, 00:15:45.885 "auth": { 00:15:45.885 "state": "completed", 00:15:45.885 "digest": "sha512", 00:15:45.885 "dhgroup": "ffdhe8192" 00:15:45.885 } 00:15:45.885 } 00:15:45.885 ]' 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.885 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.143 23:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:01:YmEwMmE3M2NkZDVjZTZiYzdkNGE1YjhiODU5MDA4NDXod/R3: --dhchap-ctrl-secret DHHC-1:02:ZTllMzYwOTczYzQ2ZmI5NTFiMzE4ZWY2ODk1Mjk5NTgyNTU0ZDI0MWNhOWUyNDVix2YyfQ==: 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.710 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.968 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.535 00:15:47.535 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.535 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.535 23:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.793 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.793 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.793 23:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.793 23:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.051 { 00:15:48.051 "cntlid": 141, 00:15:48.051 "qid": 0, 00:15:48.051 "state": "enabled", 00:15:48.051 "thread": "nvmf_tgt_poll_group_000", 00:15:48.051 "listen_address": { 00:15:48.051 "trtype": "TCP", 00:15:48.051 "adrfam": "IPv4", 00:15:48.051 "traddr": "10.0.0.2", 00:15:48.051 "trsvcid": "4420" 00:15:48.051 }, 00:15:48.051 "peer_address": { 00:15:48.051 "trtype": "TCP", 00:15:48.051 "adrfam": "IPv4", 00:15:48.051 "traddr": "10.0.0.1", 00:15:48.051 "trsvcid": "45838" 00:15:48.051 }, 00:15:48.051 "auth": { 00:15:48.051 "state": "completed", 00:15:48.051 "digest": "sha512", 00:15:48.051 "dhgroup": "ffdhe8192" 00:15:48.051 } 00:15:48.051 } 00:15:48.051 ]' 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.051 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.310 23:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:02:NDgzYTBlZGI0ZTczNGMyYzYyNTYyN2JiZDMyNDkyODVhOWZhZmNjMDU0NDQ4MWNlEDEbrg==: --dhchap-ctrl-secret DHHC-1:01:ODA4NGM3MGQ0YTI1ZjYyNTVkNWMyOGMzYzJjMDMwM2LAAI/z: 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.245 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.504 23:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.071 00:15:50.071 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.071 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.071 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
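Between RPC-driven passes the script also exercises the kernel initiator (target/auth.sh@52 and @55 in the records above): nvme-cli connects with the DH-CHAP secrets passed explicitly and then disconnects one controller. A hedged sketch of that round-trip follows; the transport address, NQNs, and host ID are the ones in the log, while the DHHC-1 secret values below are placeholders standing in for the base64 strings printed above.

# Placeholders <...> stand in for the DHHC-1 secrets shown in the log records above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 \
    --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 \
    --dhchap-secret 'DHHC-1:02:<host secret from the log>' \
    --dhchap-ctrl-secret 'DHHC-1:01:<controller secret from the log>'
# Expected outcome, as logged: the controller authenticates, and the disconnect
# then reports "disconnected 1 controller(s)".
nvme disconnect -n nqn.2024-03.io.spdk:cnode0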
00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.329 { 00:15:50.329 "cntlid": 143, 00:15:50.329 "qid": 0, 00:15:50.329 "state": "enabled", 00:15:50.329 "thread": "nvmf_tgt_poll_group_000", 00:15:50.329 "listen_address": { 00:15:50.329 "trtype": "TCP", 00:15:50.329 "adrfam": "IPv4", 00:15:50.329 "traddr": "10.0.0.2", 00:15:50.329 "trsvcid": "4420" 00:15:50.329 }, 00:15:50.329 "peer_address": { 00:15:50.329 "trtype": "TCP", 00:15:50.329 "adrfam": "IPv4", 00:15:50.329 "traddr": "10.0.0.1", 00:15:50.329 "trsvcid": "52200" 00:15:50.329 }, 00:15:50.329 "auth": { 00:15:50.329 "state": "completed", 00:15:50.329 "digest": "sha512", 00:15:50.329 "dhgroup": "ffdhe8192" 00:15:50.329 } 00:15:50.329 } 00:15:50.329 ]' 00:15:50.329 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.587 23:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.846 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:51.411 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.411 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:51.411 23:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.411 23:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.411 23:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.412 23:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.977 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.545 00:15:52.545 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.545 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.545 23:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.807 { 00:15:52.807 "cntlid": 145, 00:15:52.807 "qid": 0, 00:15:52.807 "state": "enabled", 00:15:52.807 "thread": "nvmf_tgt_poll_group_000", 00:15:52.807 "listen_address": { 00:15:52.807 "trtype": "TCP", 00:15:52.807 "adrfam": "IPv4", 00:15:52.807 "traddr": "10.0.0.2", 00:15:52.807 "trsvcid": "4420" 00:15:52.807 }, 00:15:52.807 "peer_address": { 00:15:52.807 "trtype": "TCP", 00:15:52.807 "adrfam": "IPv4", 00:15:52.807 "traddr": "10.0.0.1", 00:15:52.807 "trsvcid": "52228" 00:15:52.807 }, 00:15:52.807 "auth": { 00:15:52.807 "state": "completed", 00:15:52.807 "digest": "sha512", 00:15:52.807 "dhgroup": "ffdhe8192" 00:15:52.807 } 00:15:52.807 } 
00:15:52.807 ]' 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.807 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.065 23:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:00:Yzc2YTMwMmVmYzgzMWJjNWEwN2Y1ZjdhNWU5ZWUyMzcyNWJkYWQ4ZjBjYTljMTM1YR2INA==: --dhchap-ctrl-secret DHHC-1:03:YTA5NTQ0NDg4MjhlOGE3YTQzOTkyNzE2YWU2OWI4NTE4MDg1OTYxOWJmOTg3MjNhNGEyOTVmY2I2ODQyNDBiY9OIVgc=: 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.000 23:16:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:54.000 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:54.258 request: 00:15:54.259 { 00:15:54.259 "name": "nvme0", 00:15:54.259 "trtype": "tcp", 00:15:54.259 "traddr": "10.0.0.2", 00:15:54.259 "adrfam": "ipv4", 00:15:54.259 "trsvcid": "4420", 00:15:54.259 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:15:54.259 "prchk_reftag": false, 00:15:54.259 "prchk_guard": false, 00:15:54.259 "hdgst": false, 00:15:54.259 "ddgst": false, 00:15:54.259 "dhchap_key": "key2", 00:15:54.259 "method": "bdev_nvme_attach_controller", 00:15:54.259 "req_id": 1 00:15:54.259 } 00:15:54.259 Got JSON-RPC error response 00:15:54.259 response: 00:15:54.259 { 00:15:54.259 "code": -5, 00:15:54.259 "message": "Input/output error" 00:15:54.259 } 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.259 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.517 23:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:55.085 request: 00:15:55.085 { 00:15:55.085 "name": "nvme0", 00:15:55.085 "trtype": "tcp", 00:15:55.085 "traddr": "10.0.0.2", 00:15:55.085 "adrfam": "ipv4", 00:15:55.085 "trsvcid": "4420", 00:15:55.085 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:15:55.085 "prchk_reftag": false, 00:15:55.085 "prchk_guard": false, 00:15:55.085 "hdgst": false, 00:15:55.085 "ddgst": false, 00:15:55.085 "dhchap_key": "key1", 00:15:55.085 "dhchap_ctrlr_key": "ckey2", 00:15:55.085 "method": "bdev_nvme_attach_controller", 00:15:55.085 "req_id": 1 00:15:55.085 } 00:15:55.085 Got JSON-RPC error response 00:15:55.085 response: 00:15:55.085 { 00:15:55.085 "code": -5, 00:15:55.085 "message": "Input/output error" 00:15:55.085 } 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key1 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.085 23:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.669 request: 00:15:55.669 { 00:15:55.669 "name": "nvme0", 00:15:55.669 "trtype": "tcp", 00:15:55.669 "traddr": "10.0.0.2", 00:15:55.669 "adrfam": "ipv4", 00:15:55.669 "trsvcid": "4420", 00:15:55.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:15:55.669 "prchk_reftag": false, 00:15:55.669 "prchk_guard": false, 00:15:55.669 "hdgst": false, 00:15:55.669 "ddgst": false, 00:15:55.669 "dhchap_key": "key1", 00:15:55.669 "dhchap_ctrlr_key": "ckey1", 00:15:55.669 "method": "bdev_nvme_attach_controller", 00:15:55.669 "req_id": 1 00:15:55.669 } 00:15:55.669 Got JSON-RPC error response 00:15:55.669 response: 00:15:55.669 { 00:15:55.669 "code": -5, 00:15:55.669 "message": "Input/output error" 00:15:55.669 } 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69590 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69590 ']' 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69590 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69590 00:15:55.669 killing process with pid 69590 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69590' 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69590 00:15:55.669 23:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69590 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72635 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72635 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72635 ']' 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.928 23:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
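At target/auth.sh@138-139 the log shows the first target application (pid 69590) being killed and a fresh nvmf_tgt being started with --wait-for-rpc and the nvmf_auth log flag, so the remaining cases can reconfigure authentication over JSON-RPC before the subsystem comes up. A rough sketch of that restart follows, using the command line printed in the log; the polling loop is only a simplified stand-in for the harness's waitforlisten helper, and old_pid stands for the pid reported above.

# Assumes old_pid holds the previous nvmf_tgt pid (69590 in this run).
kill "$old_pid"
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Poll until the app answers on its default socket (/var/tmp/spdk.sock) before
# issuing any further rpc_cmd calls against it.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done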
00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72635 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72635 ']' 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.903 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.161 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.161 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:57.161 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:15:57.161 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.161 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.420 23:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.986 00:15:57.986 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.986 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.986 23:16:20 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.245 { 00:15:58.245 "cntlid": 1, 00:15:58.245 "qid": 0, 00:15:58.245 "state": "enabled", 00:15:58.245 "thread": "nvmf_tgt_poll_group_000", 00:15:58.245 "listen_address": { 00:15:58.245 "trtype": "TCP", 00:15:58.245 "adrfam": "IPv4", 00:15:58.245 "traddr": "10.0.0.2", 00:15:58.245 "trsvcid": "4420" 00:15:58.245 }, 00:15:58.245 "peer_address": { 00:15:58.245 "trtype": "TCP", 00:15:58.245 "adrfam": "IPv4", 00:15:58.245 "traddr": "10.0.0.1", 00:15:58.245 "trsvcid": "52294" 00:15:58.245 }, 00:15:58.245 "auth": { 00:15:58.245 "state": "completed", 00:15:58.245 "digest": "sha512", 00:15:58.245 "dhgroup": "ffdhe8192" 00:15:58.245 } 00:15:58.245 } 00:15:58.245 ]' 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.245 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.504 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.504 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.504 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.504 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.504 23:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.762 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-secret DHHC-1:03:Yzg1NmMxY2ZmZTAxZTg0MDRhYjBiODBkZGRhODZjNTE0YzU5MjFlYThiZThiMTlmZDY0ZWZhZjU0ZWJmYzU1ZfaUBg8=: 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --dhchap-key key3 00:15:59.328 23:16:21 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:59.328 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.586 23:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.845 request: 00:15:59.845 { 00:15:59.845 "name": "nvme0", 00:15:59.845 "trtype": "tcp", 00:15:59.845 "traddr": "10.0.0.2", 00:15:59.845 "adrfam": "ipv4", 00:15:59.845 "trsvcid": "4420", 00:15:59.845 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:59.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:15:59.845 "prchk_reftag": false, 00:15:59.845 "prchk_guard": false, 00:15:59.845 "hdgst": false, 00:15:59.845 "ddgst": false, 00:15:59.845 "dhchap_key": "key3", 00:15:59.845 "method": "bdev_nvme_attach_controller", 00:15:59.845 "req_id": 1 00:15:59.845 } 00:15:59.845 Got JSON-RPC error response 00:15:59.845 response: 00:15:59.845 { 00:15:59.845 "code": -5, 00:15:59.845 "message": "Input/output error" 00:15:59.845 } 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
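
For reference, the sha512/ffdhe8192 exchange traced above reduces to a short RPC sequence; the lines below are a hand-condensed illustration, not captured output. They assume a running nvmf_tgt (RPC socket /var/tmp/spdk.sock) and host-side bdev application (/var/tmp/host.sock) with the DH-HMAC-CHAP key already registered under the name key3 earlier in the test, and they use only the rpc.py subcommands and flags that appear in the trace.

# Illustrative sketch only: configure and verify DH-HMAC-CHAP, mirroring target/auth.sh above.
rpc=scripts/rpc.py                      # path assumed; the log uses the full repo path
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53

# Target side: permit the host and bind the previously registered key "key3" to it.
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach a controller, authenticating with the matching key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3

# Target side: confirm the queue pair negotiated sha512/ffdhe8192 and completed authentication.
$rpc -s /var/tmp/spdk.sock nvmf_subsystem_get_qpairs "$subnqn" \
  | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'    # expect: sha512 ffdhe8192 completed

The failed attach attempts around this point in the log exercise the negative path: once the host's allowed digests or DH groups are narrowed (auth.sh@157, @163), the same attach call returns the JSON-RPC Input/output error shown in the trace.
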
00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:59.845 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.105 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.364 request: 00:16:00.364 { 00:16:00.364 "name": "nvme0", 00:16:00.364 "trtype": "tcp", 00:16:00.364 "traddr": "10.0.0.2", 00:16:00.364 "adrfam": "ipv4", 00:16:00.364 "trsvcid": "4420", 00:16:00.364 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:00.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:16:00.364 "prchk_reftag": false, 00:16:00.364 "prchk_guard": false, 00:16:00.364 "hdgst": false, 00:16:00.364 "ddgst": false, 00:16:00.364 "dhchap_key": "key3", 00:16:00.364 "method": "bdev_nvme_attach_controller", 00:16:00.364 "req_id": 1 00:16:00.364 } 00:16:00.364 Got JSON-RPC error response 00:16:00.364 response: 00:16:00.364 { 00:16:00.364 "code": -5, 00:16:00.364 "message": "Input/output error" 00:16:00.364 } 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:00.364 23:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.623 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:16:00.882 request: 00:16:00.882 { 00:16:00.882 "name": "nvme0", 00:16:00.882 "trtype": "tcp", 00:16:00.882 "traddr": "10.0.0.2", 00:16:00.882 "adrfam": "ipv4", 00:16:00.882 "trsvcid": "4420", 00:16:00.882 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:00.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53", 00:16:00.882 "prchk_reftag": false, 00:16:00.882 "prchk_guard": false, 00:16:00.882 "hdgst": false, 00:16:00.882 "ddgst": false, 00:16:00.882 "dhchap_key": "key0", 00:16:00.882 "dhchap_ctrlr_key": "key1", 00:16:00.882 "method": "bdev_nvme_attach_controller", 00:16:00.882 "req_id": 1 00:16:00.882 } 00:16:00.882 Got JSON-RPC error response 00:16:00.882 response: 00:16:00.882 { 00:16:00.882 "code": -5, 00:16:00.882 "message": "Input/output error" 00:16:00.882 } 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.882 23:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:01.140 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:01.140 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:01.398 00:16:01.398 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:16:01.398 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:16:01.398 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.656 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.656 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.656 23:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69622 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69622 ']' 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69622 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69622 00:16:01.926 killing process with pid 69622 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:01.926 23:16:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69622' 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69622 00:16:01.926 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69622 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.496 rmmod nvme_tcp 00:16:02.496 rmmod nvme_fabrics 00:16:02.496 rmmod nvme_keyring 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72635 ']' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72635 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72635 ']' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72635 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72635 00:16:02.496 killing process with pid 72635 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72635' 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72635 00:16:02.496 23:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72635 00:16:02.754 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.754 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.754 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cmE /tmp/spdk.key-sha256.dIL /tmp/spdk.key-sha384.Y5u /tmp/spdk.key-sha512.nij /tmp/spdk.key-sha512.KZH /tmp/spdk.key-sha384.Pmf /tmp/spdk.key-sha256.Yk5 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:02.755 ************************************ 00:16:02.755 END TEST nvmf_auth_target 00:16:02.755 ************************************ 00:16:02.755 00:16:02.755 real 2m51.121s 00:16:02.755 user 6m48.028s 00:16:02.755 sys 0m28.026s 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.755 23:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.013 23:16:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:03.013 23:16:25 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:16:03.013 23:16:25 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:03.013 23:16:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:03.013 23:16:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.013 23:16:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.013 ************************************ 00:16:03.013 START TEST nvmf_bdevio_no_huge 00:16:03.013 ************************************ 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:03.013 * Looking for test storage... 00:16:03.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.013 
23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:03.013 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.014 23:16:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:03.014 Cannot find device "nvmf_tgt_br" 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.014 Cannot find device "nvmf_tgt_br2" 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:03.014 Cannot find device "nvmf_tgt_br" 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:03.014 Cannot find device "nvmf_tgt_br2" 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:16:03.014 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:03.272 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:03.272 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.272 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:03.272 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.272 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:03.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:16:03.273 00:16:03.273 --- 10.0.0.2 ping statistics --- 00:16:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.273 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:03.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:03.273 00:16:03.273 --- 10.0.0.3 ping statistics --- 00:16:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.273 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:03.273 00:16:03.273 --- 10.0.0.1 ping statistics --- 00:16:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.273 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.273 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72950 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72950 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72950 ']' 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.531 23:16:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:03.531 [2024-07-24 23:16:25.843668] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:16:03.531 [2024-07-24 23:16:25.843763] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:03.531 [2024-07-24 23:16:25.997116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.790 [2024-07-24 23:16:26.142916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
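
The nvmf_veth_init sequence traced just above is the network fixture every TCP test in this log depends on: nvmf_init_if (10.0.0.1/24) stays in the root namespace, nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.2/24 and 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined by the nvmf_br bridge. A condensed, standalone sketch of the same bring-up follows (not captured output; root privileges, iproute2 and iptables assumed):

# Condensed reconstruction of the nvmf_veth_init bring-up traced above (run as root).
set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one initiator-side pair, two target-side pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; addresses match the log (10.0.0.1/2/3).
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the root-namespace ends together and open TCP/4420 toward the initiator veth.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, mirroring the ping output in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
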
00:16:03.790 [2024-07-24 23:16:26.142962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.790 [2024-07-24 23:16:26.142976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.790 [2024-07-24 23:16:26.142987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.790 [2024-07-24 23:16:26.142996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.790 [2024-07-24 23:16:26.143212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:03.790 [2024-07-24 23:16:26.144276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.790 [2024-07-24 23:16:26.144105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:03.790 [2024-07-24 23:16:26.144259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:03.790 [2024-07-24 23:16:26.150172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 [2024-07-24 23:16:26.904799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 Malloc0 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:04.724 [2024-07-24 23:16:26.948053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:04.724 { 00:16:04.724 "params": { 00:16:04.724 "name": "Nvme$subsystem", 00:16:04.724 "trtype": "$TEST_TRANSPORT", 00:16:04.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:04.724 "adrfam": "ipv4", 00:16:04.724 "trsvcid": "$NVMF_PORT", 00:16:04.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:04.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:04.724 "hdgst": ${hdgst:-false}, 00:16:04.724 "ddgst": ${ddgst:-false} 00:16:04.724 }, 00:16:04.724 "method": "bdev_nvme_attach_controller" 00:16:04.724 } 00:16:04.724 EOF 00:16:04.724 )") 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:04.724 23:16:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:04.724 "params": { 00:16:04.724 "name": "Nvme1", 00:16:04.725 "trtype": "tcp", 00:16:04.725 "traddr": "10.0.0.2", 00:16:04.725 "adrfam": "ipv4", 00:16:04.725 "trsvcid": "4420", 00:16:04.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:04.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:04.725 "hdgst": false, 00:16:04.725 "ddgst": false 00:16:04.725 }, 00:16:04.725 "method": "bdev_nvme_attach_controller" 00:16:04.725 }' 00:16:04.725 [2024-07-24 23:16:27.004436] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
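
The target-side bring-up for this bdevio run (bdevio.sh@18-22 above) is only five RPCs; the sketch below restates them as plain rpc.py calls against a running nvmf_tgt and is not part of the captured log. The rpc.py path and RPC socket are placeholders; the subcommands, arguments, and flags are taken verbatim from the trace.

# Illustrative restatement of the target setup traced above (paths are placeholders).
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, same options as the test
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 through the subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio app launched above then connects back to that listener using the JSON fragment printed by gen_nvmf_target_json (bdev name Nvme1, hostnqn nqn.2016-06.io.spdk:host1), which is why the test suite below runs against Nvme1n1.
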
00:16:04.725 [2024-07-24 23:16:27.004568] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72986 ] 00:16:04.725 [2024-07-24 23:16:27.148827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.982 [2024-07-24 23:16:27.313169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.982 [2024-07-24 23:16:27.313295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.982 [2024-07-24 23:16:27.313305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.982 [2024-07-24 23:16:27.327814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:05.240 I/O targets: 00:16:05.240 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:05.240 00:16:05.240 00:16:05.240 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.240 http://cunit.sourceforge.net/ 00:16:05.240 00:16:05.240 00:16:05.240 Suite: bdevio tests on: Nvme1n1 00:16:05.240 Test: blockdev write read block ...passed 00:16:05.240 Test: blockdev write zeroes read block ...passed 00:16:05.240 Test: blockdev write zeroes read no split ...passed 00:16:05.240 Test: blockdev write zeroes read split ...passed 00:16:05.240 Test: blockdev write zeroes read split partial ...passed 00:16:05.240 Test: blockdev reset ...[2024-07-24 23:16:27.549575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.240 [2024-07-24 23:16:27.549688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef1870 (9): Bad file descriptor 00:16:05.240 [2024-07-24 23:16:27.568215] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:05.240 passed 00:16:05.240 Test: blockdev write read 8 blocks ...passed 00:16:05.240 Test: blockdev write read size > 128k ...passed 00:16:05.240 Test: blockdev write read invalid size ...passed 00:16:05.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.240 Test: blockdev write read max offset ...passed 00:16:05.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.240 Test: blockdev writev readv 8 blocks ...passed 00:16:05.240 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.240 Test: blockdev writev readv block ...passed 00:16:05.240 Test: blockdev writev readv size > 128k ...passed 00:16:05.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.240 Test: blockdev comparev and writev ...[2024-07-24 23:16:27.576749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.576889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.576974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.577065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.577625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.577743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.577834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.577907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.578496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.578621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.578700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.578778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.579244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.579349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.579439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.240 [2024-07-24 23:16:27.579509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:05.240 passed 00:16:05.240 Test: blockdev nvme passthru rw ...passed 00:16:05.240 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:16:27.580471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.240 [2024-07-24 23:16:27.580600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.580781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.240 [2024-07-24 23:16:27.580862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.581024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.240 [2024-07-24 23:16:27.581120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:05.240 [2024-07-24 23:16:27.581331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.240 [2024-07-24 23:16:27.581439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:05.240 passed 00:16:05.240 Test: blockdev nvme admin passthru ...passed 00:16:05.240 Test: blockdev copy ...passed 00:16:05.240 00:16:05.240 Run Summary: Type Total Ran Passed Failed Inactive 00:16:05.240 suites 1 1 n/a 0 0 00:16:05.240 tests 23 23 23 0 0 00:16:05.240 asserts 152 152 152 0 n/a 00:16:05.240 00:16:05.240 Elapsed time = 0.185 seconds 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.806 rmmod nvme_tcp 00:16:05.806 rmmod nvme_fabrics 00:16:05.806 rmmod nvme_keyring 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72950 ']' 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72950 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72950 ']' 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72950 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72950 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:05.806 killing process with pid 72950 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72950' 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72950 00:16:05.806 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72950 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:06.372 ************************************ 00:16:06.372 END TEST nvmf_bdevio_no_huge 00:16:06.372 ************************************ 00:16:06.372 00:16:06.372 real 0m3.304s 00:16:06.372 user 0m10.957s 00:16:06.372 sys 0m1.352s 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:06.372 23:16:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:06.372 23:16:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:06.372 23:16:28 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:06.372 23:16:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:06.373 23:16:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.373 23:16:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.373 ************************************ 00:16:06.373 START TEST nvmf_tls 00:16:06.373 ************************************ 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:06.373 * Looking for test storage... 
00:16:06.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:06.373 Cannot find device "nvmf_tgt_br" 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.373 Cannot find device "nvmf_tgt_br2" 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:06.373 Cannot find device "nvmf_tgt_br" 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:06.373 Cannot find device "nvmf_tgt_br2" 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:06.373 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.631 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:06.631 23:16:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:06.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:06.631 00:16:06.631 --- 10.0.0.2 ping statistics --- 00:16:06.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.631 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:06.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:06.631 00:16:06.631 --- 10.0.0.3 ping statistics --- 00:16:06.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.631 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:06.631 00:16:06.631 --- 10.0.0.1 ping statistics --- 00:16:06.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.631 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:06.631 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:06.632 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73168 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73168 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73168 ']' 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.889 23:16:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.889 [2024-07-24 23:16:29.205892] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:16:06.889 [2024-07-24 23:16:29.205998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.889 [2024-07-24 23:16:29.347764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.146 [2024-07-24 23:16:29.508481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.147 [2024-07-24 23:16:29.508574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
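The nvmf_veth_init block above is what gives the rest of nvmf_tls its topology: the target side lives in the nvmf_tgt_ns_spdk namespace behind 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the three veth pairs are tied together by the nvmf_br bridge. A condensed sketch of that setup, using the interface names and addresses from the log (the cleanup and error handling that nvmf/common.sh wraps around these steps are omitted):

  # target namespace plus three veth pairs; the *_br ends stay in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the root-namespace ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # let NVMe/TCP traffic in, allow forwarding across the bridge, verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, NVMF_APP is simply prefixed with "ip netns exec nvmf_tgt_ns_spdk", which is why the nvmf_tgt launched below runs inside the target namespace while rpc.py keeps reaching it over the UNIX-domain socket /var/tmp/spdk.sock.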
00:16:07.147 [2024-07-24 23:16:29.508589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.147 [2024-07-24 23:16:29.508600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.147 [2024-07-24 23:16:29.508610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.147 [2024-07-24 23:16:29.508643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:08.080 true 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:08.080 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:08.645 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:08.645 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:08.645 23:16:30 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:08.645 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:08.645 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:08.903 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:08.903 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:08.903 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:09.161 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:09.161 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:16:09.419 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:09.419 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:09.419 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:09.419 23:16:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:09.677 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:09.677 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:09.677 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:09.935 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:09.935 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:16:10.192 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:10.192 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:10.192 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:10.450 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:10.450 23:16:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:10.707 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:10.708 23:16:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.QdyAp5Yl9N 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.VQWvNXQ9w7 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.QdyAp5Yl9N 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VQWvNXQ9w7 00:16:10.966 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:11.223 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:11.481 [2024-07-24 23:16:33.883527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:16:11.481 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.QdyAp5Yl9N 00:16:11.481 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.QdyAp5Yl9N 00:16:11.481 23:16:33 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:11.739 [2024-07-24 23:16:34.166024] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.739 23:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:11.997 23:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:12.255 [2024-07-24 23:16:34.710179] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:12.255 [2024-07-24 23:16:34.710501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.255 23:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:12.514 malloc0 00:16:12.514 23:16:34 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:12.771 23:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QdyAp5Yl9N 00:16:13.028 [2024-07-24 23:16:35.452844] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:13.028 23:16:35 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QdyAp5Yl9N 00:16:25.227 Initializing NVMe Controllers 00:16:25.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:25.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:25.227 Initialization complete. Launching workers. 
00:16:25.227 ======================================================== 00:16:25.227 Latency(us) 00:16:25.227 Device Information : IOPS MiB/s Average min max 00:16:25.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9488.88 37.07 6746.36 1138.08 11279.64 00:16:25.227 ======================================================== 00:16:25.227 Total : 9488.88 37.07 6746.36 1138.08 11279.64 00:16:25.227 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QdyAp5Yl9N 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QdyAp5Yl9N' 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73411 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73411 /var/tmp/bdevperf.sock 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73411 ']' 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.227 23:16:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.227 [2024-07-24 23:16:45.727197] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
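On top of that network, everything the target needs for the TLS cases is configured through rpc.py against the nvmf_tgt started with --wait-for-rpc above: the ssl socket implementation is selected and pinned to TLS 1.3, a PSK in NVMe/TCP interchange form (the NVMeTLSkey-1:01:...: strings emitted by format_interchange_psk, i.e. a prefix, a hash indicator and base64-encoded key material) is written to a 0600 file, and the subsystem, TLS-enabled listener and allowed host are created with -k and --psk. A condensed sketch of that sequence; $rpc and /tmp/psk.key stand in for the full rpc.py path and the mktemp name /tmp/tmp.QdyAp5Yl9N from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/psk.key   # stand-in for the mktemp path used by tls.sh

  # PSK in interchange form, written out and locked down as in the log
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
  chmod 0600 "$key"

  # select the ssl socket implementation and require TLS 1.3, then finish startup
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # tls.sh asserts this prints 13
  $rpc framework_start_init

  # TCP transport, subsystem, TLS listener (-k) and a malloc-backed namespace
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # host1 may attach to cnode1 only when it presents this PSK
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"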
00:16:25.227 [2024-07-24 23:16:45.727309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:16:25.227 [2024-07-24 23:16:45.869660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.227 [2024-07-24 23:16:45.995707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.227 [2024-07-24 23:16:46.073621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:25.227 23:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.227 23:16:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:25.227 23:16:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QdyAp5Yl9N 00:16:25.228 [2024-07-24 23:16:46.961816] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.228 [2024-07-24 23:16:46.961990] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:25.228 TLSTESTn1 00:16:25.228 23:16:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:25.228 Running I/O for 10 seconds... 00:16:35.198 00:16:35.198 Latency(us) 00:16:35.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.198 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:35.198 Verification LBA range: start 0x0 length 0x2000 00:16:35.198 TLSTESTn1 : 10.02 4217.50 16.47 0.00 0.00 30291.00 232.73 21805.61 00:16:35.198 =================================================================================================================== 00:16:35.198 Total : 4217.50 16.47 0.00 0.00 30291.00 232.73 21805.61 00:16:35.198 0 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73411 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73411 ']' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73411 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73411 00:16:35.198 killing process with pid 73411 00:16:35.198 Received shutdown signal, test time was about 10.000000 seconds 00:16:35.198 00:16:35.198 Latency(us) 00:16:35.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.198 =================================================================================================================== 00:16:35.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73411' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73411 00:16:35.198 [2024-07-24 23:16:57.240832] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73411 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQWvNXQ9w7 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQWvNXQ9w7 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQWvNXQ9w7 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VQWvNXQ9w7' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73539 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73539 /var/tmp/bdevperf.sock 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73539 ']' 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.198 23:16:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:35.198 [2024-07-24 23:16:57.527508] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
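The successful run_bdevperf case above is the initiator-side mirror of that configuration: bdevperf is started idle with -z, the TLS listener is attached over bdevperf's own RPC socket with the same PSK file that was registered for host1, and perform_tests then drives the 128-deep 4k verify workload against the resulting TLSTESTn1 bdev. Roughly, with paths as in the log (the test additionally waits for the RPC socket via waitforlisten before issuing any RPCs):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # start bdevperf waiting for RPC configuration: 128-deep, 4096-byte verify for 10s
  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

  # attach the target's TLS listener as bdev "TLSTEST"; --psk must match the key
  # the target registered for host1 on cnode1
  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.QdyAp5Yl9N

  # kick off the configured workload (20s RPC timeout)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests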
00:16:35.198 [2024-07-24 23:16:57.527596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73539 ] 00:16:35.198 [2024-07-24 23:16:57.661890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.457 [2024-07-24 23:16:57.783717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.458 [2024-07-24 23:16:57.839533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:36.046 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.046 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:36.046 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VQWvNXQ9w7 00:16:36.305 [2024-07-24 23:16:58.698013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:36.305 [2024-07-24 23:16:58.698201] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:36.305 [2024-07-24 23:16:58.707759] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:36.305 [2024-07-24 23:16:58.707796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd61f0 (107): Transport endpoint is not connected 00:16:36.305 [2024-07-24 23:16:58.708783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd61f0 (9): Bad file descriptor 00:16:36.305 [2024-07-24 23:16:58.709780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:36.305 [2024-07-24 23:16:58.709806] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:36.305 [2024-07-24 23:16:58.709819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:36.305 request: 00:16:36.305 { 00:16:36.305 "name": "TLSTEST", 00:16:36.305 "trtype": "tcp", 00:16:36.305 "traddr": "10.0.0.2", 00:16:36.305 "adrfam": "ipv4", 00:16:36.305 "trsvcid": "4420", 00:16:36.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.305 "prchk_reftag": false, 00:16:36.305 "prchk_guard": false, 00:16:36.305 "hdgst": false, 00:16:36.305 "ddgst": false, 00:16:36.305 "psk": "/tmp/tmp.VQWvNXQ9w7", 00:16:36.305 "method": "bdev_nvme_attach_controller", 00:16:36.305 "req_id": 1 00:16:36.305 } 00:16:36.305 Got JSON-RPC error response 00:16:36.305 response: 00:16:36.305 { 00:16:36.305 "code": -5, 00:16:36.305 "message": "Input/output error" 00:16:36.305 } 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73539 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73539 ']' 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73539 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73539 00:16:36.305 killing process with pid 73539 00:16:36.305 Received shutdown signal, test time was about 10.000000 seconds 00:16:36.305 00:16:36.305 Latency(us) 00:16:36.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.305 =================================================================================================================== 00:16:36.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73539' 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73539 00:16:36.305 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73539 00:16:36.305 [2024-07-24 23:16:58.758517] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QdyAp5Yl9N 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QdyAp5Yl9N 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QdyAp5Yl9N 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QdyAp5Yl9N' 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73567 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73567 /var/tmp/bdevperf.sock 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73567 ']' 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.564 23:16:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.564 23:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.564 23:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.564 [2024-07-24 23:16:59.044925] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
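The wrong-key failure above is the first of four negative checks in this stretch of tls.sh: keep the identity but present the second key (target/tls.sh@146), keep the key but use a hostnqn (@149) or subsystem NQN (@152) the target has no PSK registered for, and finally attach with no PSK at all (@155). In every case bdev_nvme_attach_controller is expected to come back with -5 (Input/output error), and the NOT helper from autotest_common.sh inverts the exit status so the test only passes when the attach really does fail. The pattern, using the test-suite helpers seen in the log rather than standalone commands:

  # identity (host1/cnode1) is known to the target, but the client offers the second
  # interchange key, so the handshake presumably cannot complete and the client only
  # sees errno 107 (Transport endpoint is not connected)
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VQWvNXQ9w7

  # right key, but a hostnqn/subnqn pair with no registered PSK; the target logs
  # "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QdyAp5Yl9N
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QdyAp5Yl9N

  # no PSK at all against the listener that was created with -k
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''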
00:16:36.823 [2024-07-24 23:16:59.045269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73567 ] 00:16:36.823 [2024-07-24 23:16:59.177813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.823 [2024-07-24 23:16:59.290664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.081 [2024-07-24 23:16:59.346336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:37.647 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.647 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:37.647 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.QdyAp5Yl9N 00:16:37.906 [2024-07-24 23:17:00.341152] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.906 [2024-07-24 23:17:00.341602] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:37.906 [2024-07-24 23:17:00.347860] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:37.906 [2024-07-24 23:17:00.348254] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:37.906 [2024-07-24 23:17:00.348484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f61f0 (107)[2024-07-24 23:17:00.348490] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:37.906 : Transport endpoint is not connected 00:16:37.906 [2024-07-24 23:17:00.349468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f61f0 (9): Bad file descriptor 00:16:37.906 [2024-07-24 23:17:00.350449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:37.906 [2024-07-24 23:17:00.350632] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:37.906 [2024-07-24 23:17:00.350737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:37.906 request: 00:16:37.906 { 00:16:37.906 "name": "TLSTEST", 00:16:37.906 "trtype": "tcp", 00:16:37.906 "traddr": "10.0.0.2", 00:16:37.906 "adrfam": "ipv4", 00:16:37.906 "trsvcid": "4420", 00:16:37.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.906 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:37.906 "prchk_reftag": false, 00:16:37.906 "prchk_guard": false, 00:16:37.906 "hdgst": false, 00:16:37.906 "ddgst": false, 00:16:37.906 "psk": "/tmp/tmp.QdyAp5Yl9N", 00:16:37.906 "method": "bdev_nvme_attach_controller", 00:16:37.906 "req_id": 1 00:16:37.906 } 00:16:37.906 Got JSON-RPC error response 00:16:37.906 response: 00:16:37.906 { 00:16:37.906 "code": -5, 00:16:37.906 "message": "Input/output error" 00:16:37.906 } 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73567 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73567 ']' 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73567 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.906 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73567 00:16:38.164 killing process with pid 73567 00:16:38.164 Received shutdown signal, test time was about 10.000000 seconds 00:16:38.164 00:16:38.164 Latency(us) 00:16:38.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.164 =================================================================================================================== 00:16:38.164 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73567' 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73567 00:16:38.165 [2024-07-24 23:17:00.400044] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73567 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QdyAp5Yl9N 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QdyAp5Yl9N 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QdyAp5Yl9N 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QdyAp5Yl9N' 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73594 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73594 /var/tmp/bdevperf.sock 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73594 ']' 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.165 23:17:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:38.423 [2024-07-24 23:17:00.668292] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:38.423 [2024-07-24 23:17:00.668725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73594 ] 00:16:38.423 [2024-07-24 23:17:00.802958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.682 [2024-07-24 23:17:00.914880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.682 [2024-07-24 23:17:00.969408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:39.248 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.248 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:39.248 23:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QdyAp5Yl9N 00:16:39.506 [2024-07-24 23:17:01.931294] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:39.506 [2024-07-24 23:17:01.931434] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:39.506 [2024-07-24 23:17:01.938181] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:39.506 [2024-07-24 23:17:01.938239] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:39.506 [2024-07-24 23:17:01.938375] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:39.506 [2024-07-24 23:17:01.938942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174d1f0 (107): Transport endpoint is not connected 00:16:39.506 [2024-07-24 23:17:01.939945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174d1f0 (9): Bad file descriptor 00:16:39.506 [2024-07-24 23:17:01.940927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:39.506 [2024-07-24 23:17:01.940951] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:39.506 [2024-07-24 23:17:01.940991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:39.506 request: 00:16:39.506 { 00:16:39.506 "name": "TLSTEST", 00:16:39.506 "trtype": "tcp", 00:16:39.506 "traddr": "10.0.0.2", 00:16:39.506 "adrfam": "ipv4", 00:16:39.506 "trsvcid": "4420", 00:16:39.506 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:39.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:39.506 "prchk_reftag": false, 00:16:39.506 "prchk_guard": false, 00:16:39.506 "hdgst": false, 00:16:39.506 "ddgst": false, 00:16:39.506 "psk": "/tmp/tmp.QdyAp5Yl9N", 00:16:39.506 "method": "bdev_nvme_attach_controller", 00:16:39.506 "req_id": 1 00:16:39.506 } 00:16:39.506 Got JSON-RPC error response 00:16:39.506 response: 00:16:39.506 { 00:16:39.506 "code": -5, 00:16:39.506 "message": "Input/output error" 00:16:39.506 } 00:16:39.506 23:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73594 00:16:39.506 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73594 ']' 00:16:39.506 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73594 00:16:39.506 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73594 00:16:39.507 killing process with pid 73594 00:16:39.507 Received shutdown signal, test time was about 10.000000 seconds 00:16:39.507 00:16:39.507 Latency(us) 00:16:39.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.507 =================================================================================================================== 00:16:39.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73594' 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73594 00:16:39.507 [2024-07-24 23:17:01.989619] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:39.507 23:17:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73594 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73622 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73622 /var/tmp/bdevperf.sock 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73622 ']' 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.765 23:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.023 [2024-07-24 23:17:02.269153] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:40.023 [2024-07-24 23:17:02.269475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73622 ] 00:16:40.023 [2024-07-24 23:17:02.400442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.281 [2024-07-24 23:17:02.517258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.281 [2024-07-24 23:17:02.572815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:40.848 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.848 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:40.848 23:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:41.106 [2024-07-24 23:17:03.428374] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:41.106 [2024-07-24 23:17:03.430329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee0c00 (9): Bad file descriptor 00:16:41.106 [2024-07-24 23:17:03.431323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:41.106 [2024-07-24 23:17:03.431348] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:41.106 [2024-07-24 23:17:03.431363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:41.106 request: 00:16:41.106 { 00:16:41.106 "name": "TLSTEST", 00:16:41.106 "trtype": "tcp", 00:16:41.106 "traddr": "10.0.0.2", 00:16:41.106 "adrfam": "ipv4", 00:16:41.106 "trsvcid": "4420", 00:16:41.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.106 "prchk_reftag": false, 00:16:41.106 "prchk_guard": false, 00:16:41.106 "hdgst": false, 00:16:41.106 "ddgst": false, 00:16:41.106 "method": "bdev_nvme_attach_controller", 00:16:41.106 "req_id": 1 00:16:41.106 } 00:16:41.106 Got JSON-RPC error response 00:16:41.106 response: 00:16:41.106 { 00:16:41.106 "code": -5, 00:16:41.107 "message": "Input/output error" 00:16:41.107 } 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73622 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73622 ']' 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73622 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73622 00:16:41.107 killing process with pid 73622 00:16:41.107 Received shutdown signal, test time was about 10.000000 seconds 00:16:41.107 00:16:41.107 Latency(us) 00:16:41.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.107 =================================================================================================================== 00:16:41.107 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73622' 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73622 00:16:41.107 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73622 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 73168 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73168 ']' 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73168 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73168 00:16:41.365 killing process with pid 73168 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
73168' 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73168 00:16:41.365 [2024-07-24 23:17:03.726563] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:41.365 23:17:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73168 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:41.624 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wEpgwBz0EC 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wEpgwBz0EC 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73665 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73665 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:41.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73665 ']' 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.883 23:17:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.883 [2024-07-24 23:17:04.186945] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
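The key_long generated above is the NVMe TLS PSK interchange form of the 48-character configured key. A minimal sketch of what the embedded python step appears to compute, on the assumption that the interchange form is base64(configured key bytes plus their little-endian CRC32) behind the NVMeTLSkey-1:<hash>: prefix with a trailing colon, hash identifier 02 corresponding to SHA-384:

    # sketch only; intended to reproduce key_long above under the stated assumption
    key=00112233445566778899aabbccddeeff0011223344556677
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:%02x:%s:" % (2, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key"

The resulting string is what gets written to /tmp/tmp.wEpgwBz0EC and chmod'ed to 0600 before the nvmf target is started.
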
00:16:41.883 [2024-07-24 23:17:04.187038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.883 [2024-07-24 23:17:04.327512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.142 [2024-07-24 23:17:04.466232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.142 [2024-07-24 23:17:04.466297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.142 [2024-07-24 23:17:04.466308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.142 [2024-07-24 23:17:04.466317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.142 [2024-07-24 23:17:04.466323] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.142 [2024-07-24 23:17:04.466350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.142 [2024-07-24 23:17:04.543846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:42.710 23:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.710 23:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:42.710 23:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.710 23:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.710 23:17:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.968 23:17:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.968 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:16:42.968 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wEpgwBz0EC 00:16:42.968 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:42.968 [2024-07-24 23:17:05.446934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.232 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:43.505 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:43.505 [2024-07-24 23:17:05.979083] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:43.505 [2024-07-24 23:17:05.979399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.763 23:17:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:44.021 malloc0 00:16:44.021 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:44.279 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:16:44.279 
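Condensed, the setup_nvmf_tgt sequence that target pid 73665 just went through is the following (rpc.py again stands for the full scripts/rpc.py path; -k on the listener requests the secure/TLS channel, matching "secure_channel": true in the saved configuration further down):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC
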
[2024-07-24 23:17:06.750571] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wEpgwBz0EC 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wEpgwBz0EC' 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73714 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73714 /var/tmp/bdevperf.sock 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73714 ']' 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.538 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.539 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.539 23:17:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:44.539 [2024-07-24 23:17:06.817139] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
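On the initiator side, the run_bdevperf step that follows boils down to two calls against the bdevperf RPC socket; a sketch with the same paths as this run (bdevperf.py lives under examples/bdev/bdevperf/ in the SPDK tree):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.wEpgwBz0EC
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With a valid 0600 key on both ends this is the case that succeeds: TLSTESTn1 is created and the 10-second verify run below reports roughly 3.9k IOPS.
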
00:16:44.539 [2024-07-24 23:17:06.817528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73714 ] 00:16:44.539 [2024-07-24 23:17:06.953258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.797 [2024-07-24 23:17:07.086580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.797 [2024-07-24 23:17:07.144209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:45.364 23:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.364 23:17:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:45.364 23:17:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:16:45.622 [2024-07-24 23:17:07.960027] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:45.622 [2024-07-24 23:17:07.960231] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:45.622 TLSTESTn1 00:16:45.622 23:17:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:45.879 Running I/O for 10 seconds... 00:16:55.918 00:16:55.918 Latency(us) 00:16:55.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.918 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:55.918 Verification LBA range: start 0x0 length 0x2000 00:16:55.918 TLSTESTn1 : 10.02 3911.19 15.28 0.00 0.00 32661.01 8281.37 33363.78 00:16:55.918 =================================================================================================================== 00:16:55.918 Total : 3911.19 15.28 0.00 0.00 32661.01 8281.37 33363.78 00:16:55.918 0 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73714 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73714 ']' 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73714 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73714 00:16:55.918 killing process with pid 73714 00:16:55.918 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.918 00:16:55.918 Latency(us) 00:16:55.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.918 =================================================================================================================== 00:16:55.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73714' 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73714 00:16:55.918 [2024-07-24 23:17:18.223460] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:55.918 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73714 00:16:56.177 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wEpgwBz0EC 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wEpgwBz0EC 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wEpgwBz0EC 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wEpgwBz0EC 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wEpgwBz0EC' 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73850 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73850 /var/tmp/bdevperf.sock 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73850 ']' 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.178 23:17:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.178 [2024-07-24 23:17:18.510825] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:16:56.178 [2024-07-24 23:17:18.511115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73850 ] 00:16:56.178 [2024-07-24 23:17:18.647376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.436 [2024-07-24 23:17:18.770054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.436 [2024-07-24 23:17:18.824686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:16:57.377 [2024-07-24 23:17:19.764467] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:57.377 [2024-07-24 23:17:19.764580] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:57.377 [2024-07-24 23:17:19.764593] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wEpgwBz0EC 00:16:57.377 request: 00:16:57.377 { 00:16:57.377 "name": "TLSTEST", 00:16:57.377 "trtype": "tcp", 00:16:57.377 "traddr": "10.0.0.2", 00:16:57.377 "adrfam": "ipv4", 00:16:57.377 "trsvcid": "4420", 00:16:57.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.377 "prchk_reftag": false, 00:16:57.377 "prchk_guard": false, 00:16:57.377 "hdgst": false, 00:16:57.377 "ddgst": false, 00:16:57.377 "psk": "/tmp/tmp.wEpgwBz0EC", 00:16:57.377 "method": "bdev_nvme_attach_controller", 00:16:57.377 "req_id": 1 00:16:57.377 } 00:16:57.377 Got JSON-RPC error response 00:16:57.377 response: 00:16:57.377 { 00:16:57.377 "code": -1, 00:16:57.377 "message": "Operation not permitted" 00:16:57.377 } 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73850 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73850 ']' 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73850 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73850 00:16:57.377 killing process with pid 73850 00:16:57.377 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.377 00:16:57.377 Latency(us) 00:16:57.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.377 =================================================================================================================== 00:16:57.377 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73850' 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73850 00:16:57.377 23:17:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73850 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73665 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73665 ']' 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73665 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73665 00:16:57.647 killing process with pid 73665 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73665' 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73665 00:16:57.647 [2024-07-24 23:17:20.067955] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:57.647 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73665 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73888 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73888 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73888 ']' 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.215 23:17:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.215 [2024-07-24 23:17:20.464477] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
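The failure above is the file-permission check on the initiator side: once the key has been chmod'ed to 0666, bdev_nvme_attach_controller refuses to load it. The same check is exercised on the target side next (pid 73888, below) at nvmf_subsystem_add_host time. In sketch form, the initiator-side case just seen:

    chmod 0666 /tmp/tmp.wEpgwBz0EC
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.wEpgwBz0EC
    # -> "Incorrect permissions for PSK file" / JSON-RPC -1 "Operation not permitted"
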
00:16:58.215 [2024-07-24 23:17:20.464901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.215 [2024-07-24 23:17:20.599924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.474 [2024-07-24 23:17:20.703317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.474 [2024-07-24 23:17:20.703918] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.474 [2024-07-24 23:17:20.704250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.474 [2024-07-24 23:17:20.704658] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.474 [2024-07-24 23:17:20.704734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.474 [2024-07-24 23:17:20.704932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.474 [2024-07-24 23:17:20.779940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wEpgwBz0EC 00:16:59.041 23:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:59.300 [2024-07-24 23:17:21.653118] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.300 23:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:59.557 23:17:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:59.815 [2024-07-24 23:17:22.137265] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:16:59.815 [2024-07-24 23:17:22.137560] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.815 23:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:00.072 malloc0 00:17:00.072 23:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:00.330 23:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:17:00.590 [2024-07-24 23:17:22.828920] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:00.590 [2024-07-24 23:17:22.828968] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:00.590 [2024-07-24 23:17:22.829016] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:00.590 request: 00:17:00.590 { 00:17:00.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.590 "host": "nqn.2016-06.io.spdk:host1", 00:17:00.590 "psk": "/tmp/tmp.wEpgwBz0EC", 00:17:00.590 "method": "nvmf_subsystem_add_host", 00:17:00.590 "req_id": 1 00:17:00.590 } 00:17:00.590 Got JSON-RPC error response 00:17:00.590 response: 00:17:00.590 { 00:17:00.590 "code": -32603, 00:17:00.590 "message": "Internal error" 00:17:00.590 } 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73888 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73888 ']' 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73888 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73888 00:17:00.590 killing process with pid 73888 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73888' 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73888 00:17:00.590 23:17:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73888 00:17:00.848 23:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wEpgwBz0EC 00:17:00.848 23:17:23 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73945 00:17:00.849 
23:17:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73945 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73945 ']' 00:17:00.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.849 23:17:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.849 [2024-07-24 23:17:23.271244] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:00.849 [2024-07-24 23:17:23.271649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.107 [2024-07-24 23:17:23.408537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.107 [2024-07-24 23:17:23.518369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.107 [2024-07-24 23:17:23.518732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.107 [2024-07-24 23:17:23.518752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:01.107 [2024-07-24 23:17:23.518761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:01.107 [2024-07-24 23:17:23.518768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
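For reference, the target-side variant of the permission check failed one step earlier in the log: with the key still 0666, the target could create the transport, subsystem and TLS listener, but nvmf_subsystem_add_host could not read the PSK (tcp_load_psk: "Incorrect permissions for PSK file", then "Could not retrieve PSK from file") and the RPC returned -32603 "Internal error". A sketch of that negative step:

    chmod 0666 /tmp/tmp.wEpgwBz0EC
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC
    # -> JSON-RPC -32603 "Internal error"
    chmod 0600 /tmp/tmp.wEpgwBz0EC    # done at target/tls.sh@181 before target pid 73945 is started

From here the test repeats the full setup with the 0600 key against target pid 73945 and then dumps both the target and bdevperf configurations with save_config.
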
00:17:01.107 [2024-07-24 23:17:23.518798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.364 [2024-07-24 23:17:23.597929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wEpgwBz0EC 00:17:01.931 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.190 [2024-07-24 23:17:24.469344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.190 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:02.448 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:02.706 [2024-07-24 23:17:24.961443] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.706 [2024-07-24 23:17:24.961688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.706 23:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:02.964 malloc0 00:17:02.964 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:03.222 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:17:03.223 [2024-07-24 23:17:25.649428] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=74000 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 74000 /var/tmp/bdevperf.sock 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74000 ']' 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.223 23:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.481 [2024-07-24 23:17:25.713396] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:03.481 [2024-07-24 23:17:25.713752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ] 00:17:03.481 [2024-07-24 23:17:25.851766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.739 [2024-07-24 23:17:25.986344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.739 [2024-07-24 23:17:26.042467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:04.305 23:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.305 23:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:04.305 23:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:17:04.562 [2024-07-24 23:17:26.958731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.562 [2024-07-24 23:17:26.958873] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:04.562 TLSTESTn1 00:17:04.820 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:05.078 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:05.078 "subsystems": [ 00:17:05.078 { 00:17:05.078 "subsystem": "keyring", 00:17:05.078 "config": [] 00:17:05.078 }, 00:17:05.078 { 00:17:05.078 "subsystem": "iobuf", 00:17:05.078 "config": [ 00:17:05.078 { 00:17:05.078 "method": "iobuf_set_options", 00:17:05.079 "params": { 00:17:05.079 "small_pool_count": 8192, 00:17:05.079 "large_pool_count": 1024, 00:17:05.079 "small_bufsize": 8192, 00:17:05.079 "large_bufsize": 135168 00:17:05.079 } 00:17:05.079 } 00:17:05.079 ] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "sock", 00:17:05.079 "config": [ 00:17:05.079 { 00:17:05.079 "method": "sock_set_default_impl", 00:17:05.079 "params": { 00:17:05.079 "impl_name": "uring" 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "sock_impl_set_options", 00:17:05.079 "params": { 00:17:05.079 "impl_name": "ssl", 00:17:05.079 "recv_buf_size": 4096, 00:17:05.079 "send_buf_size": 4096, 00:17:05.079 "enable_recv_pipe": true, 00:17:05.079 "enable_quickack": false, 00:17:05.079 "enable_placement_id": 0, 00:17:05.079 "enable_zerocopy_send_server": true, 00:17:05.079 "enable_zerocopy_send_client": false, 00:17:05.079 "zerocopy_threshold": 0, 00:17:05.079 "tls_version": 0, 00:17:05.079 "enable_ktls": false 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "sock_impl_set_options", 00:17:05.079 "params": { 00:17:05.079 "impl_name": "posix", 00:17:05.079 "recv_buf_size": 2097152, 
00:17:05.079 "send_buf_size": 2097152, 00:17:05.079 "enable_recv_pipe": true, 00:17:05.079 "enable_quickack": false, 00:17:05.079 "enable_placement_id": 0, 00:17:05.079 "enable_zerocopy_send_server": true, 00:17:05.079 "enable_zerocopy_send_client": false, 00:17:05.079 "zerocopy_threshold": 0, 00:17:05.079 "tls_version": 0, 00:17:05.079 "enable_ktls": false 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "sock_impl_set_options", 00:17:05.079 "params": { 00:17:05.079 "impl_name": "uring", 00:17:05.079 "recv_buf_size": 2097152, 00:17:05.079 "send_buf_size": 2097152, 00:17:05.079 "enable_recv_pipe": true, 00:17:05.079 "enable_quickack": false, 00:17:05.079 "enable_placement_id": 0, 00:17:05.079 "enable_zerocopy_send_server": false, 00:17:05.079 "enable_zerocopy_send_client": false, 00:17:05.079 "zerocopy_threshold": 0, 00:17:05.079 "tls_version": 0, 00:17:05.079 "enable_ktls": false 00:17:05.079 } 00:17:05.079 } 00:17:05.079 ] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "vmd", 00:17:05.079 "config": [] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "accel", 00:17:05.079 "config": [ 00:17:05.079 { 00:17:05.079 "method": "accel_set_options", 00:17:05.079 "params": { 00:17:05.079 "small_cache_size": 128, 00:17:05.079 "large_cache_size": 16, 00:17:05.079 "task_count": 2048, 00:17:05.079 "sequence_count": 2048, 00:17:05.079 "buf_count": 2048 00:17:05.079 } 00:17:05.079 } 00:17:05.079 ] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "bdev", 00:17:05.079 "config": [ 00:17:05.079 { 00:17:05.079 "method": "bdev_set_options", 00:17:05.079 "params": { 00:17:05.079 "bdev_io_pool_size": 65535, 00:17:05.079 "bdev_io_cache_size": 256, 00:17:05.079 "bdev_auto_examine": true, 00:17:05.079 "iobuf_small_cache_size": 128, 00:17:05.079 "iobuf_large_cache_size": 16 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_raid_set_options", 00:17:05.079 "params": { 00:17:05.079 "process_window_size_kb": 1024 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_iscsi_set_options", 00:17:05.079 "params": { 00:17:05.079 "timeout_sec": 30 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_nvme_set_options", 00:17:05.079 "params": { 00:17:05.079 "action_on_timeout": "none", 00:17:05.079 "timeout_us": 0, 00:17:05.079 "timeout_admin_us": 0, 00:17:05.079 "keep_alive_timeout_ms": 10000, 00:17:05.079 "arbitration_burst": 0, 00:17:05.079 "low_priority_weight": 0, 00:17:05.079 "medium_priority_weight": 0, 00:17:05.079 "high_priority_weight": 0, 00:17:05.079 "nvme_adminq_poll_period_us": 10000, 00:17:05.079 "nvme_ioq_poll_period_us": 0, 00:17:05.079 "io_queue_requests": 0, 00:17:05.079 "delay_cmd_submit": true, 00:17:05.079 "transport_retry_count": 4, 00:17:05.079 "bdev_retry_count": 3, 00:17:05.079 "transport_ack_timeout": 0, 00:17:05.079 "ctrlr_loss_timeout_sec": 0, 00:17:05.079 "reconnect_delay_sec": 0, 00:17:05.079 "fast_io_fail_timeout_sec": 0, 00:17:05.079 "disable_auto_failback": false, 00:17:05.079 "generate_uuids": false, 00:17:05.079 "transport_tos": 0, 00:17:05.079 "nvme_error_stat": false, 00:17:05.079 "rdma_srq_size": 0, 00:17:05.079 "io_path_stat": false, 00:17:05.079 "allow_accel_sequence": false, 00:17:05.079 "rdma_max_cq_size": 0, 00:17:05.079 "rdma_cm_event_timeout_ms": 0, 00:17:05.079 "dhchap_digests": [ 00:17:05.079 "sha256", 00:17:05.079 "sha384", 00:17:05.079 "sha512" 00:17:05.079 ], 00:17:05.079 "dhchap_dhgroups": [ 00:17:05.079 "null", 00:17:05.079 "ffdhe2048", 00:17:05.079 "ffdhe3072", 
00:17:05.079 "ffdhe4096", 00:17:05.079 "ffdhe6144", 00:17:05.079 "ffdhe8192" 00:17:05.079 ] 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_nvme_set_hotplug", 00:17:05.079 "params": { 00:17:05.079 "period_us": 100000, 00:17:05.079 "enable": false 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_malloc_create", 00:17:05.079 "params": { 00:17:05.079 "name": "malloc0", 00:17:05.079 "num_blocks": 8192, 00:17:05.079 "block_size": 4096, 00:17:05.079 "physical_block_size": 4096, 00:17:05.079 "uuid": "dc0fb0b2-8ac9-4615-89ab-5ae12093986c", 00:17:05.079 "optimal_io_boundary": 0 00:17:05.079 } 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "method": "bdev_wait_for_examine" 00:17:05.079 } 00:17:05.079 ] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "nbd", 00:17:05.079 "config": [] 00:17:05.079 }, 00:17:05.079 { 00:17:05.079 "subsystem": "scheduler", 00:17:05.079 "config": [ 00:17:05.079 { 00:17:05.079 "method": "framework_set_scheduler", 00:17:05.079 "params": { 00:17:05.079 "name": "static" 00:17:05.079 } 00:17:05.079 } 00:17:05.079 ] 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "subsystem": "nvmf", 00:17:05.080 "config": [ 00:17:05.080 { 00:17:05.080 "method": "nvmf_set_config", 00:17:05.080 "params": { 00:17:05.080 "discovery_filter": "match_any", 00:17:05.080 "admin_cmd_passthru": { 00:17:05.080 "identify_ctrlr": false 00:17:05.080 } 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_set_max_subsystems", 00:17:05.080 "params": { 00:17:05.080 "max_subsystems": 1024 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_set_crdt", 00:17:05.080 "params": { 00:17:05.080 "crdt1": 0, 00:17:05.080 "crdt2": 0, 00:17:05.080 "crdt3": 0 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_create_transport", 00:17:05.080 "params": { 00:17:05.080 "trtype": "TCP", 00:17:05.080 "max_queue_depth": 128, 00:17:05.080 "max_io_qpairs_per_ctrlr": 127, 00:17:05.080 "in_capsule_data_size": 4096, 00:17:05.080 "max_io_size": 131072, 00:17:05.080 "io_unit_size": 131072, 00:17:05.080 "max_aq_depth": 128, 00:17:05.080 "num_shared_buffers": 511, 00:17:05.080 "buf_cache_size": 4294967295, 00:17:05.080 "dif_insert_or_strip": false, 00:17:05.080 "zcopy": false, 00:17:05.080 "c2h_success": false, 00:17:05.080 "sock_priority": 0, 00:17:05.080 "abort_timeout_sec": 1, 00:17:05.080 "ack_timeout": 0, 00:17:05.080 "data_wr_pool_size": 0 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_create_subsystem", 00:17:05.080 "params": { 00:17:05.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.080 "allow_any_host": false, 00:17:05.080 "serial_number": "SPDK00000000000001", 00:17:05.080 "model_number": "SPDK bdev Controller", 00:17:05.080 "max_namespaces": 10, 00:17:05.080 "min_cntlid": 1, 00:17:05.080 "max_cntlid": 65519, 00:17:05.080 "ana_reporting": false 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_subsystem_add_host", 00:17:05.080 "params": { 00:17:05.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.080 "host": "nqn.2016-06.io.spdk:host1", 00:17:05.080 "psk": "/tmp/tmp.wEpgwBz0EC" 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_subsystem_add_ns", 00:17:05.080 "params": { 00:17:05.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.080 "namespace": { 00:17:05.080 "nsid": 1, 00:17:05.080 "bdev_name": "malloc0", 00:17:05.080 "nguid": "DC0FB0B28AC9461589AB5AE12093986C", 00:17:05.080 "uuid": "dc0fb0b2-8ac9-4615-89ab-5ae12093986c", 
00:17:05.080 "no_auto_visible": false 00:17:05.080 } 00:17:05.080 } 00:17:05.080 }, 00:17:05.080 { 00:17:05.080 "method": "nvmf_subsystem_add_listener", 00:17:05.080 "params": { 00:17:05.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.080 "listen_address": { 00:17:05.080 "trtype": "TCP", 00:17:05.080 "adrfam": "IPv4", 00:17:05.080 "traddr": "10.0.0.2", 00:17:05.080 "trsvcid": "4420" 00:17:05.080 }, 00:17:05.080 "secure_channel": true 00:17:05.080 } 00:17:05.080 } 00:17:05.080 ] 00:17:05.080 } 00:17:05.080 ] 00:17:05.080 }' 00:17:05.080 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:05.337 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:05.337 "subsystems": [ 00:17:05.337 { 00:17:05.337 "subsystem": "keyring", 00:17:05.337 "config": [] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "iobuf", 00:17:05.337 "config": [ 00:17:05.337 { 00:17:05.337 "method": "iobuf_set_options", 00:17:05.337 "params": { 00:17:05.337 "small_pool_count": 8192, 00:17:05.337 "large_pool_count": 1024, 00:17:05.337 "small_bufsize": 8192, 00:17:05.337 "large_bufsize": 135168 00:17:05.337 } 00:17:05.337 } 00:17:05.337 ] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "sock", 00:17:05.337 "config": [ 00:17:05.337 { 00:17:05.337 "method": "sock_set_default_impl", 00:17:05.337 "params": { 00:17:05.337 "impl_name": "uring" 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "sock_impl_set_options", 00:17:05.337 "params": { 00:17:05.337 "impl_name": "ssl", 00:17:05.337 "recv_buf_size": 4096, 00:17:05.337 "send_buf_size": 4096, 00:17:05.337 "enable_recv_pipe": true, 00:17:05.337 "enable_quickack": false, 00:17:05.337 "enable_placement_id": 0, 00:17:05.337 "enable_zerocopy_send_server": true, 00:17:05.337 "enable_zerocopy_send_client": false, 00:17:05.337 "zerocopy_threshold": 0, 00:17:05.337 "tls_version": 0, 00:17:05.337 "enable_ktls": false 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "sock_impl_set_options", 00:17:05.337 "params": { 00:17:05.337 "impl_name": "posix", 00:17:05.337 "recv_buf_size": 2097152, 00:17:05.337 "send_buf_size": 2097152, 00:17:05.337 "enable_recv_pipe": true, 00:17:05.337 "enable_quickack": false, 00:17:05.337 "enable_placement_id": 0, 00:17:05.337 "enable_zerocopy_send_server": true, 00:17:05.337 "enable_zerocopy_send_client": false, 00:17:05.337 "zerocopy_threshold": 0, 00:17:05.337 "tls_version": 0, 00:17:05.337 "enable_ktls": false 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "sock_impl_set_options", 00:17:05.337 "params": { 00:17:05.337 "impl_name": "uring", 00:17:05.337 "recv_buf_size": 2097152, 00:17:05.337 "send_buf_size": 2097152, 00:17:05.337 "enable_recv_pipe": true, 00:17:05.337 "enable_quickack": false, 00:17:05.337 "enable_placement_id": 0, 00:17:05.337 "enable_zerocopy_send_server": false, 00:17:05.337 "enable_zerocopy_send_client": false, 00:17:05.337 "zerocopy_threshold": 0, 00:17:05.337 "tls_version": 0, 00:17:05.337 "enable_ktls": false 00:17:05.337 } 00:17:05.337 } 00:17:05.337 ] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "vmd", 00:17:05.337 "config": [] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "accel", 00:17:05.337 "config": [ 00:17:05.337 { 00:17:05.337 "method": "accel_set_options", 00:17:05.337 "params": { 00:17:05.337 "small_cache_size": 128, 00:17:05.337 "large_cache_size": 16, 00:17:05.337 "task_count": 2048, 00:17:05.337 "sequence_count": 
2048, 00:17:05.337 "buf_count": 2048 00:17:05.337 } 00:17:05.337 } 00:17:05.337 ] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "bdev", 00:17:05.337 "config": [ 00:17:05.337 { 00:17:05.337 "method": "bdev_set_options", 00:17:05.337 "params": { 00:17:05.337 "bdev_io_pool_size": 65535, 00:17:05.337 "bdev_io_cache_size": 256, 00:17:05.337 "bdev_auto_examine": true, 00:17:05.337 "iobuf_small_cache_size": 128, 00:17:05.337 "iobuf_large_cache_size": 16 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_raid_set_options", 00:17:05.337 "params": { 00:17:05.337 "process_window_size_kb": 1024 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_iscsi_set_options", 00:17:05.337 "params": { 00:17:05.337 "timeout_sec": 30 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_nvme_set_options", 00:17:05.337 "params": { 00:17:05.337 "action_on_timeout": "none", 00:17:05.337 "timeout_us": 0, 00:17:05.337 "timeout_admin_us": 0, 00:17:05.337 "keep_alive_timeout_ms": 10000, 00:17:05.337 "arbitration_burst": 0, 00:17:05.337 "low_priority_weight": 0, 00:17:05.337 "medium_priority_weight": 0, 00:17:05.337 "high_priority_weight": 0, 00:17:05.337 "nvme_adminq_poll_period_us": 10000, 00:17:05.337 "nvme_ioq_poll_period_us": 0, 00:17:05.337 "io_queue_requests": 512, 00:17:05.337 "delay_cmd_submit": true, 00:17:05.337 "transport_retry_count": 4, 00:17:05.337 "bdev_retry_count": 3, 00:17:05.337 "transport_ack_timeout": 0, 00:17:05.337 "ctrlr_loss_timeout_sec": 0, 00:17:05.337 "reconnect_delay_sec": 0, 00:17:05.337 "fast_io_fail_timeout_sec": 0, 00:17:05.337 "disable_auto_failback": false, 00:17:05.337 "generate_uuids": false, 00:17:05.337 "transport_tos": 0, 00:17:05.337 "nvme_error_stat": false, 00:17:05.337 "rdma_srq_size": 0, 00:17:05.337 "io_path_stat": false, 00:17:05.337 "allow_accel_sequence": false, 00:17:05.337 "rdma_max_cq_size": 0, 00:17:05.337 "rdma_cm_event_timeout_ms": 0, 00:17:05.337 "dhchap_digests": [ 00:17:05.337 "sha256", 00:17:05.337 "sha384", 00:17:05.337 "sha512" 00:17:05.337 ], 00:17:05.337 "dhchap_dhgroups": [ 00:17:05.337 "null", 00:17:05.337 "ffdhe2048", 00:17:05.337 "ffdhe3072", 00:17:05.337 "ffdhe4096", 00:17:05.337 "ffdhe6144", 00:17:05.337 "ffdhe8192" 00:17:05.337 ] 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_nvme_attach_controller", 00:17:05.337 "params": { 00:17:05.337 "name": "TLSTEST", 00:17:05.337 "trtype": "TCP", 00:17:05.337 "adrfam": "IPv4", 00:17:05.337 "traddr": "10.0.0.2", 00:17:05.337 "trsvcid": "4420", 00:17:05.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.337 "prchk_reftag": false, 00:17:05.337 "prchk_guard": false, 00:17:05.337 "ctrlr_loss_timeout_sec": 0, 00:17:05.337 "reconnect_delay_sec": 0, 00:17:05.337 "fast_io_fail_timeout_sec": 0, 00:17:05.337 "psk": "/tmp/tmp.wEpgwBz0EC", 00:17:05.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.337 "hdgst": false, 00:17:05.337 "ddgst": false 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_nvme_set_hotplug", 00:17:05.337 "params": { 00:17:05.337 "period_us": 100000, 00:17:05.337 "enable": false 00:17:05.337 } 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "method": "bdev_wait_for_examine" 00:17:05.337 } 00:17:05.337 ] 00:17:05.337 }, 00:17:05.337 { 00:17:05.337 "subsystem": "nbd", 00:17:05.338 "config": [] 00:17:05.338 } 00:17:05.338 ] 00:17:05.338 }' 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 74000 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 74000 ']' 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74000 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74000 00:17:05.338 killing process with pid 74000 00:17:05.338 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.338 00:17:05.338 Latency(us) 00:17:05.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.338 =================================================================================================================== 00:17:05.338 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74000' 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74000 00:17:05.338 [2024-07-24 23:17:27.690401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:05.338 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74000 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73945 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73945 ']' 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73945 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73945 00:17:05.595 killing process with pid 73945 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73945' 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73945 00:17:05.595 [2024-07-24 23:17:27.933226] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:05.595 23:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73945 00:17:05.854 23:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:05.854 23:17:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.854 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:05.854 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.854 23:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:05.854 "subsystems": [ 00:17:05.854 { 00:17:05.854 "subsystem": "keyring", 00:17:05.854 "config": [] 00:17:05.854 }, 00:17:05.854 { 00:17:05.854 "subsystem": "iobuf", 00:17:05.854 "config": [ 00:17:05.854 { 00:17:05.854 "method": "iobuf_set_options", 
00:17:05.855 "params": { 00:17:05.855 "small_pool_count": 8192, 00:17:05.855 "large_pool_count": 1024, 00:17:05.855 "small_bufsize": 8192, 00:17:05.855 "large_bufsize": 135168 00:17:05.855 } 00:17:05.855 } 00:17:05.855 ] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "sock", 00:17:05.855 "config": [ 00:17:05.855 { 00:17:05.855 "method": "sock_set_default_impl", 00:17:05.855 "params": { 00:17:05.855 "impl_name": "uring" 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "sock_impl_set_options", 00:17:05.855 "params": { 00:17:05.855 "impl_name": "ssl", 00:17:05.855 "recv_buf_size": 4096, 00:17:05.855 "send_buf_size": 4096, 00:17:05.855 "enable_recv_pipe": true, 00:17:05.855 "enable_quickack": false, 00:17:05.855 "enable_placement_id": 0, 00:17:05.855 "enable_zerocopy_send_server": true, 00:17:05.855 "enable_zerocopy_send_client": false, 00:17:05.855 "zerocopy_threshold": 0, 00:17:05.855 "tls_version": 0, 00:17:05.855 "enable_ktls": false 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "sock_impl_set_options", 00:17:05.855 "params": { 00:17:05.855 "impl_name": "posix", 00:17:05.855 "recv_buf_size": 2097152, 00:17:05.855 "send_buf_size": 2097152, 00:17:05.855 "enable_recv_pipe": true, 00:17:05.855 "enable_quickack": false, 00:17:05.855 "enable_placement_id": 0, 00:17:05.855 "enable_zerocopy_send_server": true, 00:17:05.855 "enable_zerocopy_send_client": false, 00:17:05.855 "zerocopy_threshold": 0, 00:17:05.855 "tls_version": 0, 00:17:05.855 "enable_ktls": false 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "sock_impl_set_options", 00:17:05.855 "params": { 00:17:05.855 "impl_name": "uring", 00:17:05.855 "recv_buf_size": 2097152, 00:17:05.855 "send_buf_size": 2097152, 00:17:05.855 "enable_recv_pipe": true, 00:17:05.855 "enable_quickack": false, 00:17:05.855 "enable_placement_id": 0, 00:17:05.855 "enable_zerocopy_send_server": false, 00:17:05.855 "enable_zerocopy_send_client": false, 00:17:05.855 "zerocopy_threshold": 0, 00:17:05.855 "tls_version": 0, 00:17:05.855 "enable_ktls": false 00:17:05.855 } 00:17:05.855 } 00:17:05.855 ] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "vmd", 00:17:05.855 "config": [] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "accel", 00:17:05.855 "config": [ 00:17:05.855 { 00:17:05.855 "method": "accel_set_options", 00:17:05.855 "params": { 00:17:05.855 "small_cache_size": 128, 00:17:05.855 "large_cache_size": 16, 00:17:05.855 "task_count": 2048, 00:17:05.855 "sequence_count": 2048, 00:17:05.855 "buf_count": 2048 00:17:05.855 } 00:17:05.855 } 00:17:05.855 ] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "bdev", 00:17:05.855 "config": [ 00:17:05.855 { 00:17:05.855 "method": "bdev_set_options", 00:17:05.855 "params": { 00:17:05.855 "bdev_io_pool_size": 65535, 00:17:05.855 "bdev_io_cache_size": 256, 00:17:05.855 "bdev_auto_examine": true, 00:17:05.855 "iobuf_small_cache_size": 128, 00:17:05.855 "iobuf_large_cache_size": 16 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_raid_set_options", 00:17:05.855 "params": { 00:17:05.855 "process_window_size_kb": 1024 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_iscsi_set_options", 00:17:05.855 "params": { 00:17:05.855 "timeout_sec": 30 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_nvme_set_options", 00:17:05.855 "params": { 00:17:05.855 "action_on_timeout": "none", 00:17:05.855 "timeout_us": 0, 00:17:05.855 "timeout_admin_us": 0, 00:17:05.855 
"keep_alive_timeout_ms": 10000, 00:17:05.855 "arbitration_burst": 0, 00:17:05.855 "low_priority_weight": 0, 00:17:05.855 "medium_priority_weight": 0, 00:17:05.855 "high_priority_weight": 0, 00:17:05.855 "nvme_adminq_poll_period_us": 10000, 00:17:05.855 "nvme_ioq_poll_period_us": 0, 00:17:05.855 "io_queue_requests": 0, 00:17:05.855 "delay_cmd_submit": true, 00:17:05.855 "transport_retry_count": 4, 00:17:05.855 "bdev_retry_count": 3, 00:17:05.855 "transport_ack_timeout": 0, 00:17:05.855 "ctrlr_loss_timeout_sec": 0, 00:17:05.855 "reconnect_delay_sec": 0, 00:17:05.855 "fast_io_fail_timeout_sec": 0, 00:17:05.855 "disable_auto_failback": false, 00:17:05.855 "generate_uuids": false, 00:17:05.855 "transport_tos": 0, 00:17:05.855 "nvme_error_stat": false, 00:17:05.855 "rdma_srq_size": 0, 00:17:05.855 "io_path_stat": false, 00:17:05.855 "allow_accel_sequence": false, 00:17:05.855 "rdma_max_cq_size": 0, 00:17:05.855 "rdma_cm_event_timeout_ms": 0, 00:17:05.855 "dhchap_digests": [ 00:17:05.855 "sha256", 00:17:05.855 "sha384", 00:17:05.855 "sha512" 00:17:05.855 ], 00:17:05.855 "dhchap_dhgroups": [ 00:17:05.855 "null", 00:17:05.855 "ffdhe2048", 00:17:05.855 "ffdhe3072", 00:17:05.855 "ffdhe4096", 00:17:05.855 "ffdhe6144", 00:17:05.855 "ffdhe8192" 00:17:05.855 ] 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_nvme_set_hotplug", 00:17:05.855 "params": { 00:17:05.855 "period_us": 100000, 00:17:05.855 "enable": false 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_malloc_create", 00:17:05.855 "params": { 00:17:05.855 "name": "malloc0", 00:17:05.855 "num_blocks": 8192, 00:17:05.855 "block_size": 4096, 00:17:05.855 "physical_block_size": 4096, 00:17:05.855 "uuid": "dc0fb0b2-8ac9-4615-89ab-5ae12093986c", 00:17:05.855 "optimal_io_boundary": 0 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "bdev_wait_for_examine" 00:17:05.855 } 00:17:05.855 ] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "nbd", 00:17:05.855 "config": [] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "scheduler", 00:17:05.855 "config": [ 00:17:05.855 { 00:17:05.855 "method": "framework_set_scheduler", 00:17:05.855 "params": { 00:17:05.855 "name": "static" 00:17:05.855 } 00:17:05.855 } 00:17:05.855 ] 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "subsystem": "nvmf", 00:17:05.855 "config": [ 00:17:05.855 { 00:17:05.855 "method": "nvmf_set_config", 00:17:05.855 "params": { 00:17:05.855 "discovery_filter": "match_any", 00:17:05.855 "admin_cmd_passthru": { 00:17:05.855 "identify_ctrlr": false 00:17:05.855 } 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "nvmf_set_max_subsystems", 00:17:05.855 "params": { 00:17:05.855 "max_subsystems": 1024 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "nvmf_set_crdt", 00:17:05.855 "params": { 00:17:05.855 "crdt1": 0, 00:17:05.855 "crdt2": 0, 00:17:05.855 "crdt3": 0 00:17:05.855 } 00:17:05.855 }, 00:17:05.855 { 00:17:05.855 "method": "nvmf_create_transport", 00:17:05.855 "params": { 00:17:05.855 "trtype": "TCP", 00:17:05.855 "max_queue_depth": 128, 00:17:05.855 "max_io_qpairs_per_ctrlr": 127, 00:17:05.855 "in_capsule_data_size": 4096, 00:17:05.855 "max_io_size": 131072, 00:17:05.855 "io_unit_size": 131072, 00:17:05.855 "max_aq_depth": 128, 00:17:05.855 "num_shared_buffers": 511, 00:17:05.855 "buf_cache_size": 4294967295, 00:17:05.855 "dif_insert_or_strip": false, 00:17:05.855 "zcopy": false, 00:17:05.856 "c2h_success": false, 00:17:05.856 "sock_priority": 0, 00:17:05.856 
"abort_timeout_sec": 1, 00:17:05.856 "ack_timeout": 0, 00:17:05.856 "data_wr_pool_size": 0 00:17:05.856 } 00:17:05.856 }, 00:17:05.856 { 00:17:05.856 "method": "nvmf_create_subsystem", 00:17:05.856 "params": { 00:17:05.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.856 "allow_any_host": false, 00:17:05.856 "serial_number": "SPDK00000000000001", 00:17:05.856 "model_number": "SPDK bdev Controller", 00:17:05.856 "max_namespaces": 10, 00:17:05.856 "min_cntlid": 1, 00:17:05.856 "max_cntlid": 65519, 00:17:05.856 "ana_reporting": false 00:17:05.856 } 00:17:05.856 }, 00:17:05.856 { 00:17:05.856 "method": "nvmf_subsystem_add_host", 00:17:05.856 "params": { 00:17:05.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.856 "host": "nqn.2016-06.io.spdk:host1", 00:17:05.856 "psk": "/tmp/tmp.wEpgwBz0EC" 00:17:05.856 } 00:17:05.856 }, 00:17:05.856 { 00:17:05.856 "method": "nvmf_subsystem_add_ns", 00:17:05.856 "params": { 00:17:05.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.856 "namespace": { 00:17:05.856 "nsid": 1, 00:17:05.856 "bdev_name": "malloc0", 00:17:05.856 "nguid": "DC0FB0B28AC9461589AB5AE12093986C", 00:17:05.856 "uuid": "dc0fb0b2-8ac9-4615-89ab-5ae12093986c", 00:17:05.856 "no_auto_visible": false 00:17:05.856 } 00:17:05.856 } 00:17:05.856 }, 00:17:05.856 { 00:17:05.856 "method": "nvmf_subsystem_add_listener", 00:17:05.856 "params": { 00:17:05.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.856 "listen_address": { 00:17:05.856 "trtype": "TCP", 00:17:05.856 "adrfam": "IPv4", 00:17:05.856 "traddr": "10.0.0.2", 00:17:05.856 "trsvcid": "4420" 00:17:05.856 }, 00:17:05.856 "secure_channel": true 00:17:05.856 } 00:17:05.856 } 00:17:05.856 ] 00:17:05.856 } 00:17:05.856 ] 00:17:05.856 }' 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74047 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74047 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74047 ']' 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.856 23:17:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.856 [2024-07-24 23:17:28.300815] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:05.856 [2024-07-24 23:17:28.300914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.114 [2024-07-24 23:17:28.432251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.114 [2024-07-24 23:17:28.550464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:06.114 [2024-07-24 23:17:28.550524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.114 [2024-07-24 23:17:28.550535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.114 [2024-07-24 23:17:28.550542] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.114 [2024-07-24 23:17:28.550549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.114 [2024-07-24 23:17:28.550654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.372 [2024-07-24 23:17:28.738645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:06.372 [2024-07-24 23:17:28.826869] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.372 [2024-07-24 23:17:28.842784] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:06.633 [2024-07-24 23:17:28.858838] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:06.633 [2024-07-24 23:17:28.859159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=74075 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 74075 /var/tmp/bdevperf.sock 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74075 ']' 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:06.891 23:17:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:06.891 "subsystems": [ 00:17:06.891 { 00:17:06.891 "subsystem": "keyring", 00:17:06.891 "config": [] 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "subsystem": "iobuf", 00:17:06.891 "config": [ 00:17:06.891 { 00:17:06.891 "method": "iobuf_set_options", 00:17:06.891 "params": { 00:17:06.891 "small_pool_count": 8192, 00:17:06.891 "large_pool_count": 1024, 00:17:06.891 "small_bufsize": 8192, 00:17:06.891 "large_bufsize": 135168 00:17:06.891 } 00:17:06.891 } 00:17:06.891 ] 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "subsystem": "sock", 00:17:06.891 "config": [ 00:17:06.891 { 00:17:06.891 "method": "sock_set_default_impl", 00:17:06.891 "params": { 00:17:06.891 "impl_name": "uring" 00:17:06.891 } 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "method": "sock_impl_set_options", 00:17:06.891 "params": { 00:17:06.891 "impl_name": "ssl", 00:17:06.891 
"recv_buf_size": 4096, 00:17:06.891 "send_buf_size": 4096, 00:17:06.891 "enable_recv_pipe": true, 00:17:06.891 "enable_quickack": false, 00:17:06.891 "enable_placement_id": 0, 00:17:06.891 "enable_zerocopy_send_server": true, 00:17:06.891 "enable_zerocopy_send_client": false, 00:17:06.891 "zerocopy_threshold": 0, 00:17:06.891 "tls_version": 0, 00:17:06.891 "enable_ktls": false 00:17:06.891 } 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "method": "sock_impl_set_options", 00:17:06.891 "params": { 00:17:06.891 "impl_name": "posix", 00:17:06.891 "recv_buf_size": 2097152, 00:17:06.891 "send_buf_size": 2097152, 00:17:06.891 "enable_recv_pipe": true, 00:17:06.891 "enable_quickack": false, 00:17:06.891 "enable_placement_id": 0, 00:17:06.891 "enable_zerocopy_send_server": true, 00:17:06.891 "enable_zerocopy_send_client": false, 00:17:06.891 "zerocopy_threshold": 0, 00:17:06.891 "tls_version": 0, 00:17:06.891 "enable_ktls": false 00:17:06.891 } 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "method": "sock_impl_set_options", 00:17:06.891 "params": { 00:17:06.891 "impl_name": "uring", 00:17:06.891 "recv_buf_size": 2097152, 00:17:06.891 "send_buf_size": 2097152, 00:17:06.891 "enable_recv_pipe": true, 00:17:06.891 "enable_quickack": false, 00:17:06.891 "enable_placement_id": 0, 00:17:06.891 "enable_zerocopy_send_server": false, 00:17:06.891 "enable_zerocopy_send_client": false, 00:17:06.891 "zerocopy_threshold": 0, 00:17:06.891 "tls_version": 0, 00:17:06.891 "enable_ktls": false 00:17:06.891 } 00:17:06.891 } 00:17:06.891 ] 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "subsystem": "vmd", 00:17:06.891 "config": [] 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "subsystem": "accel", 00:17:06.891 "config": [ 00:17:06.891 { 00:17:06.891 "method": "accel_set_options", 00:17:06.891 "params": { 00:17:06.891 "small_cache_size": 128, 00:17:06.891 "large_cache_size": 16, 00:17:06.891 "task_count": 2048, 00:17:06.891 "sequence_count": 2048, 00:17:06.891 "buf_count": 2048 00:17:06.891 } 00:17:06.891 } 00:17:06.891 ] 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "subsystem": "bdev", 00:17:06.891 "config": [ 00:17:06.891 { 00:17:06.891 "method": "bdev_set_options", 00:17:06.891 "params": { 00:17:06.891 "bdev_io_pool_size": 65535, 00:17:06.891 "bdev_io_cache_size": 256, 00:17:06.891 "bdev_auto_examine": true, 00:17:06.891 "iobuf_small_cache_size": 128, 00:17:06.891 "iobuf_large_cache_size": 16 00:17:06.891 } 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "method": "bdev_raid_set_options", 00:17:06.891 "params": { 00:17:06.891 "process_window_size_kb": 1024 00:17:06.891 } 00:17:06.891 }, 00:17:06.891 { 00:17:06.891 "method": "bdev_iscsi_set_options", 00:17:06.891 "params": { 00:17:06.891 "timeout_sec": 30 00:17:06.891 } 00:17:06.891 }, 00:17:06.892 { 00:17:06.892 "method": "bdev_nvme_set_options", 00:17:06.892 "params": { 00:17:06.892 "action_on_timeout": "none", 00:17:06.892 "timeout_us": 0, 00:17:06.892 "timeout_admin_us": 0, 00:17:06.892 "keep_alive_timeout_ms": 10000, 00:17:06.892 "arbitration_burst": 0, 00:17:06.892 "low_priority_weight": 0, 00:17:06.892 "medium_priority_weight": 0, 00:17:06.892 "high_priority_weight": 0, 00:17:06.892 "nvme_adminq_poll_period_us": 10000, 00:17:06.892 "nvme_ioq_poll_period_us": 0, 00:17:06.892 "io_queue_requests": 512, 00:17:06.892 "delay_cmd_submit": true, 00:17:06.892 "transport_retry_count": 4, 00:17:06.892 "bdev_retry_count": 3, 00:17:06.892 "transport_ack_timeout": 0, 00:17:06.892 "ctrlr_loss_timeout_sec": 0, 00:17:06.892 "reconnect_delay_sec": 0, 00:17:06.892 "fast_io_fail_timeout_sec": 
0, 00:17:06.892 "disable_auto_failback": false, 00:17:06.892 "generate_uuids": false, 00:17:06.892 "transport_tos": 0, 00:17:06.892 "nvme_error_stat": false, 00:17:06.892 "rdma_srq_size": 0, 00:17:06.892 "io_path_stat": false, 00:17:06.892 "allow_accel_sequence": false, 00:17:06.892 "rdma_max_cq_size": 0, 00:17:06.892 "rdma_cm_event_timeout_ms": 0, 00:17:06.892 "dhchap_digests": [ 00:17:06.892 "sha256", 00:17:06.892 "sha384", 00:17:06.892 "sha512" 00:17:06.892 ], 00:17:06.892 "dhchap_dhgroups": [ 00:17:06.892 "null", 00:17:06.892 "ffdhe2048", 00:17:06.892 "ffdhe3072", 00:17:06.892 "ffdhe4096", 00:17:06.892 "ffdhe6144", 00:17:06.892 "ffdhe8192" 00:17:06.892 ] 00:17:06.892 } 00:17:06.892 }, 00:17:06.892 { 00:17:06.892 "method": "bdev_nvme_attach_controller", 00:17:06.892 "params": { 00:17:06.892 "name": "TLSTEST", 00:17:06.892 "trtype": "TCP", 00:17:06.892 "adrfam": "IPv4", 00:17:06.892 "traddr": "10.0.0.2", 00:17:06.892 "trsvcid": "4420", 00:17:06.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:06.892 "prchk_reftag": false, 00:17:06.892 "prchk_guard": false, 00:17:06.892 "ctrlr_loss_timeout_sec": 0, 00:17:06.892 "reconnect_delay_sec": 0, 00:17:06.892 "fast_io_fail_timeout_sec": 0, 00:17:06.892 "psk": "/tmp/tmp.wEpgwBz0EC", 00:17:06.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.892 "hdgst": false, 00:17:06.892 "ddgst": false 00:17:06.892 } 00:17:06.892 }, 00:17:06.892 { 00:17:06.892 "method": "bdev_nvme_set_hotplug", 00:17:06.892 "params": { 00:17:06.892 "period_us": 100000, 00:17:06.892 "enable": false 00:17:06.892 } 00:17:06.892 }, 00:17:06.892 { 00:17:06.892 "method": "bdev_wait_for_examine" 00:17:06.892 } 00:17:06.892 ] 00:17:06.892 }, 00:17:06.892 { 00:17:06.892 "subsystem": "nbd", 00:17:06.892 "config": [] 00:17:06.892 } 00:17:06.892 ] 00:17:06.892 }' 00:17:06.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:06.892 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:06.892 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:06.892 23:17:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.892 [2024-07-24 23:17:29.333348] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:17:06.892 [2024-07-24 23:17:29.333476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74075 ] 00:17:07.150 [2024-07-24 23:17:29.474085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.150 [2024-07-24 23:17:29.594582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.408 [2024-07-24 23:17:29.728210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:07.408 [2024-07-24 23:17:29.766815] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.408 [2024-07-24 23:17:29.766947] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:07.975 23:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:07.975 23:17:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:07.975 23:17:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:07.975 Running I/O for 10 seconds... 00:17:17.996 00:17:17.996 Latency(us) 00:17:17.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:17.996 Verification LBA range: start 0x0 length 0x2000 00:17:17.996 TLSTESTn1 : 10.02 3925.89 15.34 0.00 0.00 32543.18 5689.72 38844.97 00:17:17.996 =================================================================================================================== 00:17:17.996 Total : 3925.89 15.34 0.00 0.00 32543.18 5689.72 38844.97 00:17:17.996 0 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 74075 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74075 ']' 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74075 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74075 00:17:17.996 killing process with pid 74075 00:17:17.996 Received shutdown signal, test time was about 10.000000 seconds 00:17:17.996 00:17:17.996 Latency(us) 00:17:17.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.996 =================================================================================================================== 00:17:17.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74075' 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74075 00:17:17.996 [2024-07-24 23:17:40.395005] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:17.996 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74075 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 74047 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74047 ']' 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74047 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74047 00:17:18.270 killing process with pid 74047 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74047' 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74047 00:17:18.270 [2024-07-24 23:17:40.648652] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:18.270 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74047 00:17:18.528 23:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:18.528 23:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.528 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.528 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.528 23:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74214 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74214 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74214 ']' 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.529 23:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:18.787 [2024-07-24 23:17:41.031734] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:18.787 [2024-07-24 23:17:41.032014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.787 [2024-07-24 23:17:41.171889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.046 [2024-07-24 23:17:41.285948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:19.046 [2024-07-24 23:17:41.286320] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.046 [2024-07-24 23:17:41.286345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.046 [2024-07-24 23:17:41.286355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.046 [2024-07-24 23:17:41.286362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.046 [2024-07-24 23:17:41.286396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.046 [2024-07-24 23:17:41.340128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wEpgwBz0EC 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wEpgwBz0EC 00:17:19.614 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:19.873 [2024-07-24 23:17:42.295204] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.873 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:20.131 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:20.390 [2024-07-24 23:17:42.807292] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.390 [2024-07-24 23:17:42.807521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.390 23:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:20.648 malloc0 00:17:20.648 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wEpgwBz0EC 00:17:21.215 [2024-07-24 23:17:43.638750] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74268 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 
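This case provisions the same target state with live rpc.py calls (target/tls.sh@51 through @58 above) instead of a startup config; nvmf_subsystem_add_host again passes the PSK as a file path, which is what triggers the tcp.c:3725 deprecation warning. bdevperf is started bare this time, with no -c config, so the TLS credentials can be handed to it afterwards over its RPC socket, as the next trace lines show:

    # bdevperf launch used here (flags taken from the trace); credentials are added later via RPC
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &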
00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74268 /var/tmp/bdevperf.sock 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74268 ']' 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.215 23:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.473 [2024-07-24 23:17:43.714119] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:21.473 [2024-07-24 23:17:43.714225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74268 ] 00:17:21.473 [2024-07-24 23:17:43.854540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.731 [2024-07-24 23:17:43.976665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.731 [2024-07-24 23:17:44.054850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:22.298 23:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.298 23:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:22.298 23:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wEpgwBz0EC 00:17:22.585 23:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:22.585 [2024-07-24 23:17:45.027474] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.843 nvme0n1 00:17:22.843 23:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.843 Running I/O for 1 seconds... 
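Unlike the earlier run, the PSK here is first registered as a named key over bdevperf's RPC socket and the controller attach then references it by name, avoiding the deprecated spdk_nvme_ctrlr_opts.psk path (the attach still logs the "TLS support is considered experimental" notice). The two calls, as executed just above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wEpgwBz0EC
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The 1-second verify run whose results follow confirms that I/O flows over the TLS-secured queue pair.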
00:17:24.220 00:17:24.220 Latency(us) 00:17:24.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.220 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:24.220 Verification LBA range: start 0x0 length 0x2000 00:17:24.220 nvme0n1 : 1.02 3814.29 14.90 0.00 0.00 33087.98 6225.92 24546.21 00:17:24.220 =================================================================================================================== 00:17:24.220 Total : 3814.29 14.90 0.00 0.00 33087.98 6225.92 24546.21 00:17:24.220 0 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74268 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74268 ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74268 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74268 00:17:24.220 killing process with pid 74268 00:17:24.220 Received shutdown signal, test time was about 1.000000 seconds 00:17:24.220 00:17:24.220 Latency(us) 00:17:24.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.220 =================================================================================================================== 00:17:24.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74268' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74268 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74268 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74214 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74214 ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74214 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74214 00:17:24.220 killing process with pid 74214 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74214' 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74214 00:17:24.220 [2024-07-24 23:17:46.641959] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:24.220 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74214 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74325 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74325 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74325 ']' 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.788 23:17:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 [2024-07-24 23:17:47.050246] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:24.788 [2024-07-24 23:17:47.050348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.788 [2024-07-24 23:17:47.192723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.046 [2024-07-24 23:17:47.323656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.046 [2024-07-24 23:17:47.323735] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.046 [2024-07-24 23:17:47.323763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.046 [2024-07-24 23:17:47.323772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.046 [2024-07-24 23:17:47.323779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:25.046 [2024-07-24 23:17:47.323816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.046 [2024-07-24 23:17:47.397805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.613 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.613 [2024-07-24 23:17:48.057452] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.613 malloc0 00:17:25.613 [2024-07-24 23:17:48.093247] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.613 [2024-07-24 23:17:48.093471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=74357 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 74357 /var/tmp/bdevperf.sock 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74357 ']' 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.872 23:17:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:25.872 [2024-07-24 23:17:48.175642] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
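This final case repeats the keyring-based flow, and its point (target/tls.sh@265 and @266 below) is to read the resulting configuration back with save_config from both sides. In the tgtcfg and bperfcfg dumps that follow, the PSK shows up as a keyring_file_add_key entry plus "psk": "key0" references rather than a literal file path, and the saved target listener carries "secure_channel": false with "sock_impl": "ssl". A sketch of the capture step, assuming nothing beyond what the xtrace shows (the script keeps the output in shell variables):

    tgtcfg=$(rpc_cmd save_config)                                                                 # target-side config
    bperfcfg=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)  # initiator-side config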
00:17:25.872 [2024-07-24 23:17:48.176046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74357 ] 00:17:25.872 [2024-07-24 23:17:48.315513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.131 [2024-07-24 23:17:48.469404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.131 [2024-07-24 23:17:48.546304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:26.698 23:17:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.698 23:17:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:26.698 23:17:49 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wEpgwBz0EC 00:17:26.956 23:17:49 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:27.215 [2024-07-24 23:17:49.539117] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:27.215 nvme0n1 00:17:27.215 23:17:49 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:27.503 Running I/O for 1 seconds... 00:17:28.464 00:17:28.464 Latency(us) 00:17:28.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.464 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:28.464 Verification LBA range: start 0x0 length 0x2000 00:17:28.464 nvme0n1 : 1.03 4215.20 16.47 0.00 0.00 29986.13 10307.03 22878.02 00:17:28.464 =================================================================================================================== 00:17:28.464 Total : 4215.20 16.47 0.00 0.00 29986.13 10307.03 22878.02 00:17:28.464 0 00:17:28.464 23:17:50 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:17:28.464 23:17:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.464 23:17:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:28.464 23:17:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.464 23:17:50 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:17:28.464 "subsystems": [ 00:17:28.464 { 00:17:28.464 "subsystem": "keyring", 00:17:28.464 "config": [ 00:17:28.464 { 00:17:28.464 "method": "keyring_file_add_key", 00:17:28.464 "params": { 00:17:28.464 "name": "key0", 00:17:28.464 "path": "/tmp/tmp.wEpgwBz0EC" 00:17:28.464 } 00:17:28.464 } 00:17:28.464 ] 00:17:28.464 }, 00:17:28.464 { 00:17:28.464 "subsystem": "iobuf", 00:17:28.464 "config": [ 00:17:28.464 { 00:17:28.464 "method": "iobuf_set_options", 00:17:28.464 "params": { 00:17:28.464 "small_pool_count": 8192, 00:17:28.464 "large_pool_count": 1024, 00:17:28.464 "small_bufsize": 8192, 00:17:28.464 "large_bufsize": 135168 00:17:28.464 } 00:17:28.464 } 00:17:28.464 ] 00:17:28.464 }, 00:17:28.464 { 00:17:28.464 "subsystem": "sock", 00:17:28.464 "config": [ 00:17:28.464 { 00:17:28.464 "method": "sock_set_default_impl", 00:17:28.464 "params": { 00:17:28.464 "impl_name": "uring" 
00:17:28.464 } 00:17:28.464 }, 00:17:28.464 { 00:17:28.464 "method": "sock_impl_set_options", 00:17:28.464 "params": { 00:17:28.464 "impl_name": "ssl", 00:17:28.464 "recv_buf_size": 4096, 00:17:28.464 "send_buf_size": 4096, 00:17:28.464 "enable_recv_pipe": true, 00:17:28.464 "enable_quickack": false, 00:17:28.464 "enable_placement_id": 0, 00:17:28.464 "enable_zerocopy_send_server": true, 00:17:28.464 "enable_zerocopy_send_client": false, 00:17:28.465 "zerocopy_threshold": 0, 00:17:28.465 "tls_version": 0, 00:17:28.465 "enable_ktls": false 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "sock_impl_set_options", 00:17:28.465 "params": { 00:17:28.465 "impl_name": "posix", 00:17:28.465 "recv_buf_size": 2097152, 00:17:28.465 "send_buf_size": 2097152, 00:17:28.465 "enable_recv_pipe": true, 00:17:28.465 "enable_quickack": false, 00:17:28.465 "enable_placement_id": 0, 00:17:28.465 "enable_zerocopy_send_server": true, 00:17:28.465 "enable_zerocopy_send_client": false, 00:17:28.465 "zerocopy_threshold": 0, 00:17:28.465 "tls_version": 0, 00:17:28.465 "enable_ktls": false 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "sock_impl_set_options", 00:17:28.465 "params": { 00:17:28.465 "impl_name": "uring", 00:17:28.465 "recv_buf_size": 2097152, 00:17:28.465 "send_buf_size": 2097152, 00:17:28.465 "enable_recv_pipe": true, 00:17:28.465 "enable_quickack": false, 00:17:28.465 "enable_placement_id": 0, 00:17:28.465 "enable_zerocopy_send_server": false, 00:17:28.465 "enable_zerocopy_send_client": false, 00:17:28.465 "zerocopy_threshold": 0, 00:17:28.465 "tls_version": 0, 00:17:28.465 "enable_ktls": false 00:17:28.465 } 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "vmd", 00:17:28.465 "config": [] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "accel", 00:17:28.465 "config": [ 00:17:28.465 { 00:17:28.465 "method": "accel_set_options", 00:17:28.465 "params": { 00:17:28.465 "small_cache_size": 128, 00:17:28.465 "large_cache_size": 16, 00:17:28.465 "task_count": 2048, 00:17:28.465 "sequence_count": 2048, 00:17:28.465 "buf_count": 2048 00:17:28.465 } 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "bdev", 00:17:28.465 "config": [ 00:17:28.465 { 00:17:28.465 "method": "bdev_set_options", 00:17:28.465 "params": { 00:17:28.465 "bdev_io_pool_size": 65535, 00:17:28.465 "bdev_io_cache_size": 256, 00:17:28.465 "bdev_auto_examine": true, 00:17:28.465 "iobuf_small_cache_size": 128, 00:17:28.465 "iobuf_large_cache_size": 16 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_raid_set_options", 00:17:28.465 "params": { 00:17:28.465 "process_window_size_kb": 1024 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_iscsi_set_options", 00:17:28.465 "params": { 00:17:28.465 "timeout_sec": 30 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_nvme_set_options", 00:17:28.465 "params": { 00:17:28.465 "action_on_timeout": "none", 00:17:28.465 "timeout_us": 0, 00:17:28.465 "timeout_admin_us": 0, 00:17:28.465 "keep_alive_timeout_ms": 10000, 00:17:28.465 "arbitration_burst": 0, 00:17:28.465 "low_priority_weight": 0, 00:17:28.465 "medium_priority_weight": 0, 00:17:28.465 "high_priority_weight": 0, 00:17:28.465 "nvme_adminq_poll_period_us": 10000, 00:17:28.465 "nvme_ioq_poll_period_us": 0, 00:17:28.465 "io_queue_requests": 0, 00:17:28.465 "delay_cmd_submit": true, 00:17:28.465 "transport_retry_count": 4, 00:17:28.465 "bdev_retry_count": 3, 
00:17:28.465 "transport_ack_timeout": 0, 00:17:28.465 "ctrlr_loss_timeout_sec": 0, 00:17:28.465 "reconnect_delay_sec": 0, 00:17:28.465 "fast_io_fail_timeout_sec": 0, 00:17:28.465 "disable_auto_failback": false, 00:17:28.465 "generate_uuids": false, 00:17:28.465 "transport_tos": 0, 00:17:28.465 "nvme_error_stat": false, 00:17:28.465 "rdma_srq_size": 0, 00:17:28.465 "io_path_stat": false, 00:17:28.465 "allow_accel_sequence": false, 00:17:28.465 "rdma_max_cq_size": 0, 00:17:28.465 "rdma_cm_event_timeout_ms": 0, 00:17:28.465 "dhchap_digests": [ 00:17:28.465 "sha256", 00:17:28.465 "sha384", 00:17:28.465 "sha512" 00:17:28.465 ], 00:17:28.465 "dhchap_dhgroups": [ 00:17:28.465 "null", 00:17:28.465 "ffdhe2048", 00:17:28.465 "ffdhe3072", 00:17:28.465 "ffdhe4096", 00:17:28.465 "ffdhe6144", 00:17:28.465 "ffdhe8192" 00:17:28.465 ] 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_nvme_set_hotplug", 00:17:28.465 "params": { 00:17:28.465 "period_us": 100000, 00:17:28.465 "enable": false 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_malloc_create", 00:17:28.465 "params": { 00:17:28.465 "name": "malloc0", 00:17:28.465 "num_blocks": 8192, 00:17:28.465 "block_size": 4096, 00:17:28.465 "physical_block_size": 4096, 00:17:28.465 "uuid": "ab940cc8-e505-423b-91a7-5291ece1fc26", 00:17:28.465 "optimal_io_boundary": 0 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "bdev_wait_for_examine" 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "nbd", 00:17:28.465 "config": [] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "scheduler", 00:17:28.465 "config": [ 00:17:28.465 { 00:17:28.465 "method": "framework_set_scheduler", 00:17:28.465 "params": { 00:17:28.465 "name": "static" 00:17:28.465 } 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "subsystem": "nvmf", 00:17:28.465 "config": [ 00:17:28.465 { 00:17:28.465 "method": "nvmf_set_config", 00:17:28.465 "params": { 00:17:28.465 "discovery_filter": "match_any", 00:17:28.465 "admin_cmd_passthru": { 00:17:28.465 "identify_ctrlr": false 00:17:28.465 } 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_set_max_subsystems", 00:17:28.465 "params": { 00:17:28.465 "max_subsystems": 1024 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_set_crdt", 00:17:28.465 "params": { 00:17:28.465 "crdt1": 0, 00:17:28.465 "crdt2": 0, 00:17:28.465 "crdt3": 0 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_create_transport", 00:17:28.465 "params": { 00:17:28.465 "trtype": "TCP", 00:17:28.465 "max_queue_depth": 128, 00:17:28.465 "max_io_qpairs_per_ctrlr": 127, 00:17:28.465 "in_capsule_data_size": 4096, 00:17:28.465 "max_io_size": 131072, 00:17:28.465 "io_unit_size": 131072, 00:17:28.465 "max_aq_depth": 128, 00:17:28.465 "num_shared_buffers": 511, 00:17:28.465 "buf_cache_size": 4294967295, 00:17:28.465 "dif_insert_or_strip": false, 00:17:28.465 "zcopy": false, 00:17:28.465 "c2h_success": false, 00:17:28.465 "sock_priority": 0, 00:17:28.465 "abort_timeout_sec": 1, 00:17:28.465 "ack_timeout": 0, 00:17:28.465 "data_wr_pool_size": 0 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_create_subsystem", 00:17:28.465 "params": { 00:17:28.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.465 "allow_any_host": false, 00:17:28.465 "serial_number": "00000000000000000000", 00:17:28.465 "model_number": "SPDK bdev Controller", 00:17:28.465 "max_namespaces": 32, 
00:17:28.465 "min_cntlid": 1, 00:17:28.465 "max_cntlid": 65519, 00:17:28.465 "ana_reporting": false 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_subsystem_add_host", 00:17:28.465 "params": { 00:17:28.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.465 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.465 "psk": "key0" 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_subsystem_add_ns", 00:17:28.465 "params": { 00:17:28.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.465 "namespace": { 00:17:28.465 "nsid": 1, 00:17:28.465 "bdev_name": "malloc0", 00:17:28.465 "nguid": "AB940CC8E505423B91A75291ECE1FC26", 00:17:28.465 "uuid": "ab940cc8-e505-423b-91a7-5291ece1fc26", 00:17:28.465 "no_auto_visible": false 00:17:28.465 } 00:17:28.465 } 00:17:28.465 }, 00:17:28.465 { 00:17:28.465 "method": "nvmf_subsystem_add_listener", 00:17:28.465 "params": { 00:17:28.465 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.465 "listen_address": { 00:17:28.465 "trtype": "TCP", 00:17:28.465 "adrfam": "IPv4", 00:17:28.465 "traddr": "10.0.0.2", 00:17:28.465 "trsvcid": "4420" 00:17:28.465 }, 00:17:28.465 "secure_channel": false, 00:17:28.465 "sock_impl": "ssl" 00:17:28.465 } 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 } 00:17:28.465 ] 00:17:28.465 }' 00:17:28.465 23:17:50 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:29.073 23:17:51 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:17:29.073 "subsystems": [ 00:17:29.073 { 00:17:29.073 "subsystem": "keyring", 00:17:29.073 "config": [ 00:17:29.073 { 00:17:29.073 "method": "keyring_file_add_key", 00:17:29.073 "params": { 00:17:29.073 "name": "key0", 00:17:29.073 "path": "/tmp/tmp.wEpgwBz0EC" 00:17:29.073 } 00:17:29.073 } 00:17:29.073 ] 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "subsystem": "iobuf", 00:17:29.073 "config": [ 00:17:29.073 { 00:17:29.073 "method": "iobuf_set_options", 00:17:29.073 "params": { 00:17:29.073 "small_pool_count": 8192, 00:17:29.073 "large_pool_count": 1024, 00:17:29.073 "small_bufsize": 8192, 00:17:29.073 "large_bufsize": 135168 00:17:29.073 } 00:17:29.073 } 00:17:29.073 ] 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "subsystem": "sock", 00:17:29.073 "config": [ 00:17:29.073 { 00:17:29.073 "method": "sock_set_default_impl", 00:17:29.073 "params": { 00:17:29.073 "impl_name": "uring" 00:17:29.073 } 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "method": "sock_impl_set_options", 00:17:29.073 "params": { 00:17:29.073 "impl_name": "ssl", 00:17:29.073 "recv_buf_size": 4096, 00:17:29.073 "send_buf_size": 4096, 00:17:29.073 "enable_recv_pipe": true, 00:17:29.073 "enable_quickack": false, 00:17:29.073 "enable_placement_id": 0, 00:17:29.073 "enable_zerocopy_send_server": true, 00:17:29.073 "enable_zerocopy_send_client": false, 00:17:29.073 "zerocopy_threshold": 0, 00:17:29.073 "tls_version": 0, 00:17:29.073 "enable_ktls": false 00:17:29.073 } 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "method": "sock_impl_set_options", 00:17:29.073 "params": { 00:17:29.073 "impl_name": "posix", 00:17:29.073 "recv_buf_size": 2097152, 00:17:29.073 "send_buf_size": 2097152, 00:17:29.073 "enable_recv_pipe": true, 00:17:29.073 "enable_quickack": false, 00:17:29.073 "enable_placement_id": 0, 00:17:29.073 "enable_zerocopy_send_server": true, 00:17:29.073 "enable_zerocopy_send_client": false, 00:17:29.073 "zerocopy_threshold": 0, 00:17:29.073 "tls_version": 0, 00:17:29.073 "enable_ktls": false 00:17:29.073 } 00:17:29.073 }, 00:17:29.073 { 
00:17:29.073 "method": "sock_impl_set_options", 00:17:29.073 "params": { 00:17:29.073 "impl_name": "uring", 00:17:29.073 "recv_buf_size": 2097152, 00:17:29.073 "send_buf_size": 2097152, 00:17:29.073 "enable_recv_pipe": true, 00:17:29.073 "enable_quickack": false, 00:17:29.073 "enable_placement_id": 0, 00:17:29.073 "enable_zerocopy_send_server": false, 00:17:29.073 "enable_zerocopy_send_client": false, 00:17:29.073 "zerocopy_threshold": 0, 00:17:29.073 "tls_version": 0, 00:17:29.073 "enable_ktls": false 00:17:29.073 } 00:17:29.073 } 00:17:29.073 ] 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "subsystem": "vmd", 00:17:29.073 "config": [] 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "subsystem": "accel", 00:17:29.073 "config": [ 00:17:29.073 { 00:17:29.073 "method": "accel_set_options", 00:17:29.073 "params": { 00:17:29.073 "small_cache_size": 128, 00:17:29.073 "large_cache_size": 16, 00:17:29.073 "task_count": 2048, 00:17:29.073 "sequence_count": 2048, 00:17:29.073 "buf_count": 2048 00:17:29.073 } 00:17:29.073 } 00:17:29.073 ] 00:17:29.073 }, 00:17:29.073 { 00:17:29.073 "subsystem": "bdev", 00:17:29.073 "config": [ 00:17:29.073 { 00:17:29.073 "method": "bdev_set_options", 00:17:29.073 "params": { 00:17:29.074 "bdev_io_pool_size": 65535, 00:17:29.074 "bdev_io_cache_size": 256, 00:17:29.074 "bdev_auto_examine": true, 00:17:29.074 "iobuf_small_cache_size": 128, 00:17:29.074 "iobuf_large_cache_size": 16 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_raid_set_options", 00:17:29.074 "params": { 00:17:29.074 "process_window_size_kb": 1024 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_iscsi_set_options", 00:17:29.074 "params": { 00:17:29.074 "timeout_sec": 30 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_nvme_set_options", 00:17:29.074 "params": { 00:17:29.074 "action_on_timeout": "none", 00:17:29.074 "timeout_us": 0, 00:17:29.074 "timeout_admin_us": 0, 00:17:29.074 "keep_alive_timeout_ms": 10000, 00:17:29.074 "arbitration_burst": 0, 00:17:29.074 "low_priority_weight": 0, 00:17:29.074 "medium_priority_weight": 0, 00:17:29.074 "high_priority_weight": 0, 00:17:29.074 "nvme_adminq_poll_period_us": 10000, 00:17:29.074 "nvme_ioq_poll_period_us": 0, 00:17:29.074 "io_queue_requests": 512, 00:17:29.074 "delay_cmd_submit": true, 00:17:29.074 "transport_retry_count": 4, 00:17:29.074 "bdev_retry_count": 3, 00:17:29.074 "transport_ack_timeout": 0, 00:17:29.074 "ctrlr_loss_timeout_sec": 0, 00:17:29.074 "reconnect_delay_sec": 0, 00:17:29.074 "fast_io_fail_timeout_sec": 0, 00:17:29.074 "disable_auto_failback": false, 00:17:29.074 "generate_uuids": false, 00:17:29.074 "transport_tos": 0, 00:17:29.074 "nvme_error_stat": false, 00:17:29.074 "rdma_srq_size": 0, 00:17:29.074 "io_path_stat": false, 00:17:29.074 "allow_accel_sequence": false, 00:17:29.074 "rdma_max_cq_size": 0, 00:17:29.074 "rdma_cm_event_timeout_ms": 0, 00:17:29.074 "dhchap_digests": [ 00:17:29.074 "sha256", 00:17:29.074 "sha384", 00:17:29.074 "sha512" 00:17:29.074 ], 00:17:29.074 "dhchap_dhgroups": [ 00:17:29.074 "null", 00:17:29.074 "ffdhe2048", 00:17:29.074 "ffdhe3072", 00:17:29.074 "ffdhe4096", 00:17:29.074 "ffdhe6144", 00:17:29.074 "ffdhe8192" 00:17:29.074 ] 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_nvme_attach_controller", 00:17:29.074 "params": { 00:17:29.074 "name": "nvme0", 00:17:29.074 "trtype": "TCP", 00:17:29.074 "adrfam": "IPv4", 00:17:29.074 "traddr": "10.0.0.2", 00:17:29.074 "trsvcid": "4420", 00:17:29.074 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:17:29.074 "prchk_reftag": false, 00:17:29.074 "prchk_guard": false, 00:17:29.074 "ctrlr_loss_timeout_sec": 0, 00:17:29.074 "reconnect_delay_sec": 0, 00:17:29.074 "fast_io_fail_timeout_sec": 0, 00:17:29.074 "psk": "key0", 00:17:29.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.074 "hdgst": false, 00:17:29.074 "ddgst": false 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_nvme_set_hotplug", 00:17:29.074 "params": { 00:17:29.074 "period_us": 100000, 00:17:29.074 "enable": false 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_enable_histogram", 00:17:29.074 "params": { 00:17:29.074 "name": "nvme0n1", 00:17:29.074 "enable": true 00:17:29.074 } 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "method": "bdev_wait_for_examine" 00:17:29.074 } 00:17:29.074 ] 00:17:29.074 }, 00:17:29.074 { 00:17:29.074 "subsystem": "nbd", 00:17:29.074 "config": [] 00:17:29.074 } 00:17:29.074 ] 00:17:29.074 }' 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 74357 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74357 ']' 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74357 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74357 00:17:29.074 killing process with pid 74357 00:17:29.074 Received shutdown signal, test time was about 1.000000 seconds 00:17:29.074 00:17:29.074 Latency(us) 00:17:29.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.074 =================================================================================================================== 00:17:29.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74357' 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74357 00:17:29.074 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74357 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 74325 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74325 ']' 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74325 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74325 00:17:29.332 killing process with pid 74325 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74325' 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74325 00:17:29.332 23:17:51 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@972 -- # wait 74325 00:17:29.658 23:17:51 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:17:29.658 23:17:51 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:17:29.658 "subsystems": [ 00:17:29.658 { 00:17:29.658 "subsystem": "keyring", 00:17:29.658 "config": [ 00:17:29.658 { 00:17:29.658 "method": "keyring_file_add_key", 00:17:29.658 "params": { 00:17:29.658 "name": "key0", 00:17:29.658 "path": "/tmp/tmp.wEpgwBz0EC" 00:17:29.658 } 00:17:29.658 } 00:17:29.658 ] 00:17:29.658 }, 00:17:29.658 { 00:17:29.658 "subsystem": "iobuf", 00:17:29.658 "config": [ 00:17:29.658 { 00:17:29.658 "method": "iobuf_set_options", 00:17:29.658 "params": { 00:17:29.658 "small_pool_count": 8192, 00:17:29.658 "large_pool_count": 1024, 00:17:29.658 "small_bufsize": 8192, 00:17:29.658 "large_bufsize": 135168 00:17:29.658 } 00:17:29.658 } 00:17:29.658 ] 00:17:29.658 }, 00:17:29.658 { 00:17:29.658 "subsystem": "sock", 00:17:29.658 "config": [ 00:17:29.658 { 00:17:29.658 "method": "sock_set_default_impl", 00:17:29.658 "params": { 00:17:29.658 "impl_name": "uring" 00:17:29.658 } 00:17:29.658 }, 00:17:29.658 { 00:17:29.658 "method": "sock_impl_set_options", 00:17:29.658 "params": { 00:17:29.658 "impl_name": "ssl", 00:17:29.658 "recv_buf_size": 4096, 00:17:29.658 "send_buf_size": 4096, 00:17:29.658 "enable_recv_pipe": true, 00:17:29.658 "enable_quickack": false, 00:17:29.659 "enable_placement_id": 0, 00:17:29.659 "enable_zerocopy_send_server": true, 00:17:29.659 "enable_zerocopy_send_client": false, 00:17:29.659 "zerocopy_threshold": 0, 00:17:29.659 "tls_version": 0, 00:17:29.659 "enable_ktls": false 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "sock_impl_set_options", 00:17:29.659 "params": { 00:17:29.659 "impl_name": "posix", 00:17:29.659 "recv_buf_size": 2097152, 00:17:29.659 "send_buf_size": 2097152, 00:17:29.659 "enable_recv_pipe": true, 00:17:29.659 "enable_quickack": false, 00:17:29.659 "enable_placement_id": 0, 00:17:29.659 "enable_zerocopy_send_server": true, 00:17:29.659 "enable_zerocopy_send_client": false, 00:17:29.659 "zerocopy_threshold": 0, 00:17:29.659 "tls_version": 0, 00:17:29.659 "enable_ktls": false 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "sock_impl_set_options", 00:17:29.659 "params": { 00:17:29.659 "impl_name": "uring", 00:17:29.659 "recv_buf_size": 2097152, 00:17:29.659 "send_buf_size": 2097152, 00:17:29.659 "enable_recv_pipe": true, 00:17:29.659 "enable_quickack": false, 00:17:29.659 "enable_placement_id": 0, 00:17:29.659 "enable_zerocopy_send_server": false, 00:17:29.659 "enable_zerocopy_send_client": false, 00:17:29.659 "zerocopy_threshold": 0, 00:17:29.659 "tls_version": 0, 00:17:29.659 "enable_ktls": false 00:17:29.659 } 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "vmd", 00:17:29.659 "config": [] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "accel", 00:17:29.659 "config": [ 00:17:29.659 { 00:17:29.659 "method": "accel_set_options", 00:17:29.659 "params": { 00:17:29.659 "small_cache_size": 128, 00:17:29.659 "large_cache_size": 16, 00:17:29.659 "task_count": 2048, 00:17:29.659 "sequence_count": 2048, 00:17:29.659 "buf_count": 2048 00:17:29.659 } 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "bdev", 00:17:29.659 "config": [ 00:17:29.659 { 00:17:29.659 "method": "bdev_set_options", 00:17:29.659 "params": { 00:17:29.659 "bdev_io_pool_size": 65535, 00:17:29.659 "bdev_io_cache_size": 256, 
00:17:29.659 "bdev_auto_examine": true, 00:17:29.659 "iobuf_small_cache_size": 128, 00:17:29.659 "iobuf_large_cache_size": 16 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_raid_set_options", 00:17:29.659 "params": { 00:17:29.659 "process_window_size_kb": 1024 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_iscsi_set_options", 00:17:29.659 "params": { 00:17:29.659 "timeout_sec": 30 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_nvme_set_options", 00:17:29.659 "params": { 00:17:29.659 "action_on_timeout": "none", 00:17:29.659 "timeout_us": 0, 00:17:29.659 "timeout_admin_us": 0, 00:17:29.659 "keep_alive_timeout_ms": 10000, 00:17:29.659 "arbitration_burst": 0, 00:17:29.659 "low_priority_weight": 0, 00:17:29.659 "medium_priority_weight": 0, 00:17:29.659 "high_priority_weight": 0, 00:17:29.659 "nvme_adminq_poll_period_us": 10000, 00:17:29.659 "nvme_ioq_poll_period_us": 0, 00:17:29.659 "io_queue_requests": 0, 00:17:29.659 "delay_cmd_submit": true, 00:17:29.659 "transport_retry_count": 4, 00:17:29.659 "bdev_retry_count": 3, 00:17:29.659 "transport_ack_timeout": 0, 00:17:29.659 "ctrlr_loss_timeout_sec": 0, 00:17:29.659 "reconnect_delay_sec": 0, 00:17:29.659 "fast_io_fail_timeout_sec": 0, 00:17:29.659 "disable_auto_failback": false, 00:17:29.659 "generate_uuids": false, 00:17:29.659 "transport_tos": 0, 00:17:29.659 "nvme_error_stat": false, 00:17:29.659 "rdma_srq_size": 0, 00:17:29.659 "io_path_stat": false, 00:17:29.659 "allow_accel_sequence": false, 00:17:29.659 "rdma_max_cq_size": 0, 00:17:29.659 "rdma_cm_event_timeout_ms": 0, 00:17:29.659 "dhchap_digests": [ 00:17:29.659 "sha256", 00:17:29.659 "sha384", 00:17:29.659 "sha512" 00:17:29.659 ], 00:17:29.659 "dhchap_dhgroups": [ 00:17:29.659 "null", 00:17:29.659 "ffdhe2048", 00:17:29.659 "ffdhe3072", 00:17:29.659 "ffdhe4096", 00:17:29.659 "ffdhe6144", 00:17:29.659 "ffdhe8192" 00:17:29.659 ] 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_nvme_set_hotplug", 00:17:29.659 "params": { 00:17:29.659 "period_us": 100000, 00:17:29.659 "enable": false 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_malloc_create", 00:17:29.659 "params": { 00:17:29.659 "name": "malloc0", 00:17:29.659 "num_blocks": 8192, 00:17:29.659 "block_size": 4096, 00:17:29.659 "physical_block_size": 4096, 00:17:29.659 "uuid": "ab940cc8-e505-423b-91a7-5291ece1fc26", 00:17:29.659 "optimal_io_boundary": 0 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "bdev_wait_for_examine" 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "nbd", 00:17:29.659 "config": [] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "scheduler", 00:17:29.659 "config": [ 00:17:29.659 { 00:17:29.659 "method": "framework_set_scheduler", 00:17:29.659 "params": { 00:17:29.659 "name": "static" 00:17:29.659 } 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "subsystem": "nvmf", 00:17:29.659 "config": [ 00:17:29.659 { 00:17:29.659 "method": "nvmf_set_config", 00:17:29.659 "params": { 00:17:29.659 "discovery_filter": "match_any", 00:17:29.659 "admin_cmd_passthru": { 00:17:29.659 "identify_ctrlr": false 00:17:29.659 } 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_set_max_subsystems", 00:17:29.659 "params": { 00:17:29.659 "max_subsystems": 1024 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_set_crdt", 00:17:29.659 "params": { 00:17:29.659 "crdt1": 
0, 00:17:29.659 "crdt2": 0, 00:17:29.659 "crdt3": 0 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_create_transport", 00:17:29.659 "params": { 00:17:29.659 "trtype": "TCP", 00:17:29.659 "max_queue_depth": 128, 00:17:29.659 "max_io_qpairs_per_ctrlr": 127, 00:17:29.659 "in_capsule_data_size": 4096, 00:17:29.659 "max_io_size": 131072, 00:17:29.659 "io_unit_size": 131072, 00:17:29.659 "max_aq_depth": 128, 00:17:29.659 "num_shared_buffers": 511, 00:17:29.659 "buf_cache_size": 4294967295, 00:17:29.659 "dif_insert_or_strip": false, 00:17:29.659 "zcopy": false, 00:17:29.659 "c2h_success": false, 00:17:29.659 "sock_priority": 0, 00:17:29.659 "abort_timeout_sec": 1, 00:17:29.659 "ack_timeout": 0, 00:17:29.659 "data_wr_pool_size": 0 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_create_subsystem", 00:17:29.659 "params": { 00:17:29.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.659 "allow_any_host": false, 00:17:29.659 "serial_number": "00000000000000000000", 00:17:29.659 "model_number": "SPDK bdev Controller", 00:17:29.659 "max_namespaces": 32, 00:17:29.659 "min_cntlid": 1, 00:17:29.659 "max_cntlid": 65519, 00:17:29.659 "ana_reporting": false 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_subsystem_add_host", 00:17:29.659 "params": { 00:17:29.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.659 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.659 "psk": "key0" 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_subsystem_add_ns", 00:17:29.659 "params": { 00:17:29.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.659 "namespace": { 00:17:29.659 "nsid": 1, 00:17:29.659 "bdev_name": "malloc0", 00:17:29.659 "nguid": "AB940CC8E505423B91A75291ECE1FC26", 00:17:29.659 "uuid": "ab940cc8-e505-423b-91a7-5291ece1fc26", 00:17:29.659 "no_auto_visible": false 00:17:29.659 } 00:17:29.659 } 00:17:29.659 }, 00:17:29.659 { 00:17:29.659 "method": "nvmf_subsystem_add_listener", 00:17:29.659 "params": { 00:17:29.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.659 "listen_address": { 00:17:29.659 "trtype": "TCP", 00:17:29.659 "adrfam": "IPv4", 00:17:29.659 "traddr": "10.0.0.2", 00:17:29.659 "trsvcid": "4420" 00:17:29.659 }, 00:17:29.659 "secure_channel": false, 00:17:29.659 "sock_impl": "ssl" 00:17:29.659 } 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 } 00:17:29.659 ] 00:17:29.659 }' 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74418 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74418 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74418 ']' 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.659 23:17:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.659 [2024-07-24 23:17:52.020086] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:29.659 [2024-07-24 23:17:52.020188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.941 [2024-07-24 23:17:52.159611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.941 [2024-07-24 23:17:52.316546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.941 [2024-07-24 23:17:52.316913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.941 [2024-07-24 23:17:52.317071] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.941 [2024-07-24 23:17:52.317088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.941 [2024-07-24 23:17:52.317097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.941 [2024-07-24 23:17:52.317239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.199 [2024-07-24 23:17:52.507610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:30.199 [2024-07-24 23:17:52.599662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.199 [2024-07-24 23:17:52.631591] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.199 [2024-07-24 23:17:52.631814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.765 23:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:30.765 23:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:30.765 23:17:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:30.765 23:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:30.765 23:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
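With the target up inside nvmf_tgt_ns_spdk and the TLS listener reported on 10.0.0.2 port 4420, its state can be inspected over the RPC Unix socket before bdevperf is started. A small sketch, not part of tls.sh itself; the jq filtering is illustrative and /var/tmp/spdk.sock is the socket waitforlisten polls above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Show the TLS-secured subsystem: its allowed host (the PSK-bearing host NQN)
# and the TCP listener added with sock_impl "ssl".
$rpc -s /var/tmp/spdk.sock nvmf_get_subsystems \
  | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
        | {nqn, allow_any_host, hosts, listen_addresses}'
# save_config (the same RPC run against the bdevperf socket earlier) dumps the
# full JSON shown above, including the keyring and sock sections.
$rpc -s /var/tmp/spdk.sock save_config | jq -r '.subsystems[].subsystem'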
00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=74451 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 74451 /var/tmp/bdevperf.sock 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74451 ']' 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:30.765 23:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:17:30.765 "subsystems": [ 00:17:30.765 { 00:17:30.765 "subsystem": "keyring", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "keyring_file_add_key", 00:17:30.765 "params": { 00:17:30.765 "name": "key0", 00:17:30.765 "path": "/tmp/tmp.wEpgwBz0EC" 00:17:30.765 } 00:17:30.765 } 00:17:30.765 ] 00:17:30.765 }, 00:17:30.765 { 00:17:30.765 "subsystem": "iobuf", 00:17:30.765 "config": [ 00:17:30.765 { 00:17:30.765 "method": "iobuf_set_options", 00:17:30.765 "params": { 00:17:30.765 "small_pool_count": 8192, 00:17:30.765 "large_pool_count": 1024, 00:17:30.765 "small_bufsize": 8192, 00:17:30.765 "large_bufsize": 135168 00:17:30.766 } 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "sock", 00:17:30.766 "config": [ 00:17:30.766 { 00:17:30.766 "method": "sock_set_default_impl", 00:17:30.766 "params": { 00:17:30.766 "impl_name": "uring" 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "sock_impl_set_options", 00:17:30.766 "params": { 00:17:30.766 "impl_name": "ssl", 00:17:30.766 "recv_buf_size": 4096, 00:17:30.766 "send_buf_size": 4096, 00:17:30.766 "enable_recv_pipe": true, 00:17:30.766 "enable_quickack": false, 00:17:30.766 "enable_placement_id": 0, 00:17:30.766 "enable_zerocopy_send_server": true, 00:17:30.766 "enable_zerocopy_send_client": false, 00:17:30.766 "zerocopy_threshold": 0, 00:17:30.766 "tls_version": 0, 00:17:30.766 "enable_ktls": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "sock_impl_set_options", 00:17:30.766 "params": { 00:17:30.766 "impl_name": "posix", 00:17:30.766 "recv_buf_size": 2097152, 00:17:30.766 "send_buf_size": 2097152, 00:17:30.766 "enable_recv_pipe": true, 00:17:30.766 "enable_quickack": false, 00:17:30.766 "enable_placement_id": 0, 00:17:30.766 "enable_zerocopy_send_server": true, 00:17:30.766 "enable_zerocopy_send_client": false, 00:17:30.766 "zerocopy_threshold": 0, 00:17:30.766 "tls_version": 0, 00:17:30.766 "enable_ktls": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "sock_impl_set_options", 00:17:30.766 "params": { 00:17:30.766 "impl_name": "uring", 00:17:30.766 "recv_buf_size": 2097152, 00:17:30.766 "send_buf_size": 2097152, 00:17:30.766 "enable_recv_pipe": true, 00:17:30.766 "enable_quickack": false, 00:17:30.766 "enable_placement_id": 0, 00:17:30.766 "enable_zerocopy_send_server": false, 00:17:30.766 "enable_zerocopy_send_client": false, 00:17:30.766 
"zerocopy_threshold": 0, 00:17:30.766 "tls_version": 0, 00:17:30.766 "enable_ktls": false 00:17:30.766 } 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "vmd", 00:17:30.766 "config": [] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "accel", 00:17:30.766 "config": [ 00:17:30.766 { 00:17:30.766 "method": "accel_set_options", 00:17:30.766 "params": { 00:17:30.766 "small_cache_size": 128, 00:17:30.766 "large_cache_size": 16, 00:17:30.766 "task_count": 2048, 00:17:30.766 "sequence_count": 2048, 00:17:30.766 "buf_count": 2048 00:17:30.766 } 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "bdev", 00:17:30.766 "config": [ 00:17:30.766 { 00:17:30.766 "method": "bdev_set_options", 00:17:30.766 "params": { 00:17:30.766 "bdev_io_pool_size": 65535, 00:17:30.766 "bdev_io_cache_size": 256, 00:17:30.766 "bdev_auto_examine": true, 00:17:30.766 "iobuf_small_cache_size": 128, 00:17:30.766 "iobuf_large_cache_size": 16 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_raid_set_options", 00:17:30.766 "params": { 00:17:30.766 "process_window_size_kb": 1024 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_iscsi_set_options", 00:17:30.766 "params": { 00:17:30.766 "timeout_sec": 30 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_nvme_set_options", 00:17:30.766 "params": { 00:17:30.766 "action_on_timeout": "none", 00:17:30.766 "timeout_us": 0, 00:17:30.766 "timeout_admin_us": 0, 00:17:30.766 "keep_alive_timeout_ms": 10000, 00:17:30.766 "arbitration_burst": 0, 00:17:30.766 "low_priority_weight": 0, 00:17:30.766 "medium_priority_weight": 0, 00:17:30.766 "high_priority_weight": 0, 00:17:30.766 "nvme_adminq_poll_period_us": 10000, 00:17:30.766 "nvme_ioq_poll_period_us": 0, 00:17:30.766 "io_queue_requests": 512, 00:17:30.766 "delay_cmd_submit": true, 00:17:30.766 "transport_retry_count": 4, 00:17:30.766 "bdev_retry_count": 3, 00:17:30.766 "transport_ack_timeout": 0, 00:17:30.766 "ctrlr_loss_timeout_sec": 0, 00:17:30.766 "reconnect_delay_sec": 0, 00:17:30.766 "fast_io_fail_timeout_sec": 0, 00:17:30.766 "disable_auto_failback": false, 00:17:30.766 "generate_uuids": false, 00:17:30.766 "transport_tos": 0, 00:17:30.766 "nvme_error_stat": false, 00:17:30.766 "rdma_srq_size": 0, 00:17:30.766 "io_path_stat": false, 00:17:30.766 "allow_accel_sequence": false, 00:17:30.766 "rdma_max_cq_size": 0, 00:17:30.766 "rdma_cm_event_timeout_ms": 0, 00:17:30.766 "dhchap_digests": [ 00:17:30.766 "sha256", 00:17:30.766 "sha384", 00:17:30.766 "sha512" 00:17:30.766 ], 00:17:30.766 "dhchap_dhgroups": [ 00:17:30.766 "null", 00:17:30.766 "ffdhe2048", 00:17:30.766 "ffdhe3072", 00:17:30.766 "ffdhe4096", 00:17:30.766 "ffdhe6144", 00:17:30.766 "ffdhe8192" 00:17:30.766 ] 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_nvme_attach_controller", 00:17:30.766 "params": { 00:17:30.766 "name": "nvme0", 00:17:30.766 "trtype": "TCP", 00:17:30.766 "adrfam": "IPv4", 00:17:30.766 "traddr": "10.0.0.2", 00:17:30.766 "trsvcid": "4420", 00:17:30.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.766 "prchk_reftag": false, 00:17:30.766 "prchk_guard": false, 00:17:30.766 "ctrlr_loss_timeout_sec": 0, 00:17:30.766 "reconnect_delay_sec": 0, 00:17:30.766 "fast_io_fail_timeout_sec": 0, 00:17:30.766 "psk": "key0", 00:17:30.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.766 "hdgst": false, 00:17:30.766 "ddgst": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 
"method": "bdev_nvme_set_hotplug", 00:17:30.766 "params": { 00:17:30.766 "period_us": 100000, 00:17:30.766 "enable": false 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_enable_histogram", 00:17:30.766 "params": { 00:17:30.766 "name": "nvme0n1", 00:17:30.766 "enable": true 00:17:30.766 } 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "method": "bdev_wait_for_examine" 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }, 00:17:30.766 { 00:17:30.766 "subsystem": "nbd", 00:17:30.766 "config": [] 00:17:30.766 } 00:17:30.766 ] 00:17:30.766 }' 00:17:30.766 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.766 23:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.766 [2024-07-24 23:17:53.071102] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:30.766 [2024-07-24 23:17:53.071417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74451 ] 00:17:30.766 [2024-07-24 23:17:53.211166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.029 [2024-07-24 23:17:53.347560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.029 [2024-07-24 23:17:53.505095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:31.287 [2024-07-24 23:17:53.562375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.545 23:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.545 23:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:31.545 23:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:31.545 23:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:17:32.110 23:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.110 23:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.110 Running I/O for 1 seconds... 
00:17:33.043 00:17:33.043 Latency(us) 00:17:33.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.043 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:33.043 Verification LBA range: start 0x0 length 0x2000 00:17:33.043 nvme0n1 : 1.02 4129.69 16.13 0.00 0.00 30677.17 7983.48 19303.33 00:17:33.043 =================================================================================================================== 00:17:33.043 Total : 4129.69 16.13 0.00 0.00 30677.17 7983.48 19303.33 00:17:33.043 0 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:33.043 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.043 nvmf_trace.0 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74451 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74451 ']' 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74451 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74451 00:17:33.302 killing process with pid 74451 00:17:33.302 Received shutdown signal, test time was about 1.000000 seconds 00:17:33.302 00:17:33.302 Latency(us) 00:17:33.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.302 =================================================================================================================== 00:17:33.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74451' 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74451 00:17:33.302 23:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74451 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:33.561 rmmod nvme_tcp 00:17:33.561 rmmod nvme_fabrics 00:17:33.561 rmmod nvme_keyring 00:17:33.561 23:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74418 ']' 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74418 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74418 ']' 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74418 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74418 00:17:33.561 killing process with pid 74418 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74418' 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74418 00:17:33.561 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74418 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QdyAp5Yl9N /tmp/tmp.VQWvNXQ9w7 /tmp/tmp.wEpgwBz0EC 00:17:34.128 00:17:34.128 real 1m27.759s 00:17:34.128 user 2m15.102s 00:17:34.128 sys 0m30.715s 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.128 ************************************ 00:17:34.128 END TEST nvmf_tls 00:17:34.128 ************************************ 00:17:34.128 23:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.128 23:17:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:34.128 23:17:56 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:34.128 23:17:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:34.128 23:17:56 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.128 23:17:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:34.128 ************************************ 00:17:34.128 START TEST nvmf_fips 00:17:34.128 ************************************ 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:34.128 * Looking for test storage... 00:17:34.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.128 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:34.129 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:34.388 Error setting digest 00:17:34.388 00D2FD53D77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:34.388 00D2FD53D77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:34.388 Cannot find device "nvmf_tgt_br" 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.388 Cannot find device "nvmf_tgt_br2" 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:34.388 Cannot find device "nvmf_tgt_br" 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:34.388 Cannot find device "nvmf_tgt_br2" 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:34.388 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.647 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.648 23:17:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:34.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:17:34.648 00:17:34.648 --- 10.0.0.2 ping statistics --- 00:17:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.648 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:34.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:34.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:34.648 00:17:34.648 --- 10.0.0.3 ping statistics --- 00:17:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.648 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:34.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:34.648 00:17:34.648 --- 10.0.0.1 ping statistics --- 00:17:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.648 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.648 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74715 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74715 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74715 ']' 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.907 23:17:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:34.907 [2024-07-24 23:17:57.243313] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
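The nvmf_veth_init sequence traced above builds the private test network before the target starts: one veth pair for the initiator, two veth pairs whose target ends live inside the nvmf_tgt_ns_spdk namespace, and a bridge tying the host-side ends together. A minimal stand-alone sketch of the same topology, reconstructed from the traced commands and assuming root plus the interface and address names used by nvmf/common.sh:

# Reconstruction of nvmf_veth_init; every command appears in the trace above.
ip netns add nvmf_tgt_ns_spdk                                   # the target runs inside this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # reach both target addresses from the host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # and the initiator from inside the namespace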
00:17:34.907 [2024-07-24 23:17:57.243412] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.907 [2024-07-24 23:17:57.387733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.165 [2024-07-24 23:17:57.537638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.165 [2024-07-24 23:17:57.537717] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.165 [2024-07-24 23:17:57.537732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.165 [2024-07-24 23:17:57.537744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.165 [2024-07-24 23:17:57.537753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.165 [2024-07-24 23:17:57.537794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.165 [2024-07-24 23:17:57.615292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.099 [2024-07-24 23:17:58.481526] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.099 [2024-07-24 23:17:58.497380] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:36.099 [2024-07-24 23:17:58.497586] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.099 [2024-07-24 23:17:58.532805] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:36.099 malloc0 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
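Before the initiator side is started, fips.sh@136-141 above writes the interchange-format TLS PSK to a file and hands the path to setup_nvmf_tgt_conf, which drives scripts/rpc.py against the target; the "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice and the PSK-path deprecation warning in the trace come from that step. A short sketch of the key handling follows, with the value and paths taken from the trace; the redirect target and the rpc.py arguments used inside setup_nvmf_tgt_conf are not visible in the trace and are assumptions here.

# PSK handling as traced in fips.sh; the redirect target is an assumption.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"   # write the NVMe TLS interchange key without a trailing newline
chmod 0600 "$key_path"         # restrict permissions on the key file (fips.sh@139)
# setup_nvmf_tgt_conf "$key_path" then uses scripts/rpc.py to create the TCP transport,
# the nqn.2016-06.io.spdk:cnode1 subsystem with its malloc0 namespace, the 10.0.0.2:4420
# listener, and to register nqn.2016-06.io.spdk:host1 with this PSK (arguments not traced).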
00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74754 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74754 /var/tmp/bdevperf.sock 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74754 ']' 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.099 23:17:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:36.357 [2024-07-24 23:17:58.629881] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:36.357 [2024-07-24 23:17:58.629988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74754 ] 00:17:36.357 [2024-07-24 23:17:58.763868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.614 [2024-07-24 23:17:58.909334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.614 [2024-07-24 23:17:58.985741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:37.179 23:17:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.179 23:17:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:17:37.179 23:17:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:37.437 [2024-07-24 23:17:59.805944] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.437 [2024-07-24 23:17:59.806080] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.437 TLSTESTn1 00:17:37.437 23:17:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.696 Running I/O for 10 seconds... 
00:17:47.663 00:17:47.663 Latency(us) 00:17:47.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.663 Verification LBA range: start 0x0 length 0x2000 00:17:47.663 TLSTESTn1 : 10.02 3959.16 15.47 0.00 0.00 32266.21 8162.21 20971.52 00:17:47.663 =================================================================================================================== 00:17:47.663 Total : 3959.16 15.47 0.00 0.00 32266.21 8162.21 20971.52 00:17:47.663 0 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:47.663 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:47.663 nvmf_trace.0 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74754 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74754 ']' 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74754 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74754 00:17:47.921 killing process with pid 74754 00:17:47.921 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.921 00:17:47.921 Latency(us) 00:17:47.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.921 =================================================================================================================== 00:17:47.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74754' 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74754 00:17:47.921 [2024-07-24 23:18:10.191410] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.921 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74754 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
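The initiator half of the run above is driven entirely over the bdevperf RPC socket: fips.sh@145 starts bdevperf idle, fips.sh@150 attaches a TLS-protected NVMe/TCP controller using the PSK file, and fips.sh@154 triggers the I/O whose latency table is printed above. Restated as plain shell, with paths and arguments copied from the trace:

SPDK=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# 1. Start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4 KiB verify workload, 10 s.
"$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
# (the test waits for the socket to appear before issuing RPCs)

# 2. Attach a TLS NVMe/TCP controller, passing the PSK file prepared earlier.
"$SPDK"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$SPDK"/test/nvmf/fips/key.txt

# 3. Kick off the actual I/O; bdevperf prints the latency table when the run completes.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests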
00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:48.180 rmmod nvme_tcp 00:17:48.180 rmmod nvme_fabrics 00:17:48.180 rmmod nvme_keyring 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74715 ']' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74715 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74715 ']' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74715 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74715 00:17:48.180 killing process with pid 74715 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74715' 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74715 00:17:48.180 [2024-07-24 23:18:10.635247] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:48.180 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74715 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.748 23:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.748 23:18:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:48.748 23:18:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:48.748 00:17:48.748 real 0m14.562s 00:17:48.748 user 0m19.569s 00:17:48.748 sys 0m6.032s 00:17:48.748 23:18:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.748 23:18:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:48.748 ************************************ 00:17:48.748 END TEST nvmf_fips 00:17:48.748 ************************************ 00:17:48.748 23:18:11 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:17:48.748 23:18:11 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.748 23:18:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.748 ************************************ 00:17:48.748 START TEST nvmf_identify 00:17:48.748 ************************************ 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.748 * Looking for test storage... 00:17:48.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.748 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.006 23:18:11 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.006 23:18:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.006 23:18:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.006 23:18:11 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:49.007 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.032 23:18:11 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:49.032 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:49.033 Cannot find device "nvmf_tgt_br" 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.033 Cannot find device "nvmf_tgt_br2" 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:17:49.033 23:18:11 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:49.033 Cannot find device "nvmf_tgt_br" 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:49.033 Cannot find device "nvmf_tgt_br2" 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.033 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.033 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.291 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:49.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:17:49.292 00:17:49.292 --- 10.0.0.2 ping statistics --- 00:17:49.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.292 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:49.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:49.292 00:17:49.292 --- 10.0.0.3 ping statistics --- 00:17:49.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.292 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:49.292 00:17:49.292 --- 10.0.0.1 ping statistics --- 00:17:49.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.292 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=75099 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 75099 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 75099 ']' 00:17:49.292 23:18:11 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.292 23:18:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.292 [2024-07-24 23:18:11.664216] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:49.292 [2024-07-24 23:18:11.664326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.550 [2024-07-24 23:18:11.806243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.550 [2024-07-24 23:18:11.971913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.550 [2024-07-24 23:18:11.971994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.550 [2024-07-24 23:18:11.972058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.550 [2024-07-24 23:18:11.972071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.550 [2024-07-24 23:18:11.972080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
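Once the target is up, identify.sh configures it with a handful of rpc_cmd calls, traced below interleaved with xtrace output. Stripped of that noise, the configuration amounts to the following sequence; every argument is taken from the trace, and rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default RPC socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # what rpc_cmd wraps in the test harness (assumption)
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport with the test's options
$rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420           # expose the discovery subsystem too
$rpc nvmf_get_subsystems                                                        # dump the resulting configuration (JSON below)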
00:17:49.550 [2024-07-24 23:18:11.972268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.550 [2024-07-24 23:18:11.972433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.550 [2024-07-24 23:18:11.973026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.550 [2024-07-24 23:18:11.973074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.808 [2024-07-24 23:18:12.049424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 [2024-07-24 23:18:12.699809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 Malloc0 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 [2024-07-24 23:18:12.825743] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.395 [ 00:17:50.395 { 00:17:50.395 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:50.395 "subtype": "Discovery", 00:17:50.395 "listen_addresses": [ 00:17:50.395 { 00:17:50.395 "trtype": "TCP", 00:17:50.395 "adrfam": "IPv4", 00:17:50.395 "traddr": "10.0.0.2", 00:17:50.395 "trsvcid": "4420" 00:17:50.395 } 00:17:50.395 ], 00:17:50.395 "allow_any_host": true, 00:17:50.395 "hosts": [] 00:17:50.395 }, 00:17:50.395 { 00:17:50.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.395 "subtype": "NVMe", 00:17:50.395 "listen_addresses": [ 00:17:50.395 { 00:17:50.395 "trtype": "TCP", 00:17:50.395 "adrfam": "IPv4", 00:17:50.395 "traddr": "10.0.0.2", 00:17:50.395 "trsvcid": "4420" 00:17:50.395 } 00:17:50.395 ], 00:17:50.395 "allow_any_host": true, 00:17:50.395 "hosts": [], 00:17:50.395 "serial_number": "SPDK00000000000001", 00:17:50.395 "model_number": "SPDK bdev Controller", 00:17:50.395 "max_namespaces": 32, 00:17:50.395 "min_cntlid": 1, 00:17:50.395 "max_cntlid": 65519, 00:17:50.395 "namespaces": [ 00:17:50.395 { 00:17:50.395 "nsid": 1, 00:17:50.395 "bdev_name": "Malloc0", 00:17:50.395 "name": "Malloc0", 00:17:50.395 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:50.395 "eui64": "ABCDEF0123456789", 00:17:50.395 "uuid": "dbd904a1-7734-42f7-bc71-88ff0d30c4ef" 00:17:50.395 } 00:17:50.395 ] 00:17:50.395 } 00:17:50.395 ] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.395 23:18:12 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:50.656 [2024-07-24 23:18:12.880363] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
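With the subsystem and discovery listener in place, identify.sh@39 exercises the host path with spdk_nvme_identify against the discovery subnqn. The invocation below restates the traced command with line breaks and a comment added:

# Connect to the discovery service at 10.0.0.2:4420 over NVMe/TCP and print identify data,
# with all log flags enabled (-L all).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all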
00:17:50.656 [2024-07-24 23:18:12.880428] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75135 ] 00:17:50.656 [2024-07-24 23:18:13.020895] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:50.656 [2024-07-24 23:18:13.020980] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.656 [2024-07-24 23:18:13.020988] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.656 [2024-07-24 23:18:13.021001] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.656 [2024-07-24 23:18:13.021010] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.656 [2024-07-24 23:18:13.025269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:50.656 [2024-07-24 23:18:13.025338] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x129c2c0 0 00:17:50.656 [2024-07-24 23:18:13.033191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.656 [2024-07-24 23:18:13.033215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.656 [2024-07-24 23:18:13.033237] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.656 [2024-07-24 23:18:13.033242] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.656 [2024-07-24 23:18:13.033296] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.033304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.033309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.033326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.656 [2024-07-24 23:18:13.033358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.041242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.041263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.041269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.041289] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.656 [2024-07-24 23:18:13.041298] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:50.656 [2024-07-24 23:18:13.041304] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:50.656 [2024-07-24 23:18:13.041324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 
[2024-07-24 23:18:13.041334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.041344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.041372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.041439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.041446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.041450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.041461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:50.656 [2024-07-24 23:18:13.041469] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:50.656 [2024-07-24 23:18:13.041477] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.041493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.041543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.041593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.041600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.041604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.041615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:50.656 [2024-07-24 23:18:13.041624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.041631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.041647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.041665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.041712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.041719] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.041723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.041733] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.041744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.041760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.041777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.041829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.041837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.041841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.041851] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:50.656 [2024-07-24 23:18:13.041856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.041864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.041970] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:50.656 [2024-07-24 23:18:13.041976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.041990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.041998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.042006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.042025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.042073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.042081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.042085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 
[2024-07-24 23:18:13.042089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.042094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.656 [2024-07-24 23:18:13.042105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.042109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.042113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.656 [2024-07-24 23:18:13.042121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.656 [2024-07-24 23:18:13.042138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.656 [2024-07-24 23:18:13.042200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.656 [2024-07-24 23:18:13.042209] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.656 [2024-07-24 23:18:13.042213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.042218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.656 [2024-07-24 23:18:13.042223] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.656 [2024-07-24 23:18:13.042228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:50.656 [2024-07-24 23:18:13.042237] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:50.656 [2024-07-24 23:18:13.042248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.656 [2024-07-24 23:18:13.042259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.656 [2024-07-24 23:18:13.042264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.657 [2024-07-24 23:18:13.042293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.657 [2024-07-24 23:18:13.042392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.657 [2024-07-24 23:18:13.042400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.657 [2024-07-24 23:18:13.042404] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042409] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129c2c0): datao=0, datal=4096, cccid=0 00:17:50.657 [2024-07-24 23:18:13.042414] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12dd940) on tqpair(0x129c2c0): expected_datao=0, payload_size=4096 00:17:50.657 [2024-07-24 23:18:13.042419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 
[2024-07-24 23:18:13.042428] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042433] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.657 [2024-07-24 23:18:13.042449] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.657 [2024-07-24 23:18:13.042453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.657 [2024-07-24 23:18:13.042466] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:50.657 [2024-07-24 23:18:13.042472] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:50.657 [2024-07-24 23:18:13.042477] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:50.657 [2024-07-24 23:18:13.042483] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:50.657 [2024-07-24 23:18:13.042488] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:50.657 [2024-07-24 23:18:13.042493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:50.657 [2024-07-24 23:18:13.042502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.657 [2024-07-24 23:18:13.042510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042519] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.657 [2024-07-24 23:18:13.042545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.657 [2024-07-24 23:18:13.042605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.657 [2024-07-24 23:18:13.042612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.657 [2024-07-24 23:18:13.042616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.657 [2024-07-24 23:18:13.042629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.657 [2024-07-24 23:18:13.042652] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042656] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.657 [2024-07-24 23:18:13.042673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.657 [2024-07-24 23:18:13.042693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.657 [2024-07-24 23:18:13.042712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.657 [2024-07-24 23:18:13.042727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.657 [2024-07-24 23:18:13.042734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.657 [2024-07-24 23:18:13.042766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dd940, cid 0, qid 0 00:17:50.657 [2024-07-24 23:18:13.042773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddac0, cid 1, qid 0 00:17:50.657 [2024-07-24 23:18:13.042778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddc40, cid 2, qid 0 00:17:50.657 [2024-07-24 23:18:13.042783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.657 [2024-07-24 23:18:13.042788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddf40, cid 4, qid 0 00:17:50.657 [2024-07-24 23:18:13.042873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.657 [2024-07-24 23:18:13.042880] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.657 [2024-07-24 23:18:13.042884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddf40) on tqpair=0x129c2c0 00:17:50.657 [2024-07-24 23:18:13.042894] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:50.657 [2024-07-24 23:18:13.042905] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:50.657 [2024-07-24 23:18:13.042917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.042922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.042929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.657 [2024-07-24 23:18:13.042948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddf40, cid 4, qid 0 00:17:50.657 [2024-07-24 23:18:13.043002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.657 [2024-07-24 23:18:13.043009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.657 [2024-07-24 23:18:13.043013] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043017] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129c2c0): datao=0, datal=4096, cccid=4 00:17:50.657 [2024-07-24 23:18:13.043022] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ddf40) on tqpair(0x129c2c0): expected_datao=0, payload_size=4096 00:17:50.657 [2024-07-24 23:18:13.043027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043034] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043039] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.657 [2024-07-24 23:18:13.043053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.657 [2024-07-24 23:18:13.043057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddf40) on tqpair=0x129c2c0 00:17:50.657 [2024-07-24 23:18:13.043075] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:50.657 [2024-07-24 23:18:13.043108] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.043122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.657 [2024-07-24 23:18:13.043152] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129c2c0) 00:17:50.657 [2024-07-24 23:18:13.043170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.657 [2024-07-24 23:18:13.043199] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x12ddf40, cid 4, qid 0 00:17:50.657 [2024-07-24 23:18:13.043207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12de0c0, cid 5, qid 0 00:17:50.657 [2024-07-24 23:18:13.043319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.657 [2024-07-24 23:18:13.043327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.657 [2024-07-24 23:18:13.043331] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043335] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129c2c0): datao=0, datal=1024, cccid=4 00:17:50.657 [2024-07-24 23:18:13.043340] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ddf40) on tqpair(0x129c2c0): expected_datao=0, payload_size=1024 00:17:50.657 [2024-07-24 23:18:13.043344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043351] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043356] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.657 [2024-07-24 23:18:13.043362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.658 [2024-07-24 23:18:13.043368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.658 [2024-07-24 23:18:13.043372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12de0c0) on tqpair=0x129c2c0 00:17:50.658 [2024-07-24 23:18:13.043393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.658 [2024-07-24 23:18:13.043401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.658 [2024-07-24 23:18:13.043405] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddf40) on tqpair=0x129c2c0 00:17:50.658 [2024-07-24 23:18:13.043423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129c2c0) 00:17:50.658 [2024-07-24 23:18:13.043435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.658 [2024-07-24 23:18:13.043459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddf40, cid 4, qid 0 00:17:50.658 [2024-07-24 23:18:13.043524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.658 [2024-07-24 23:18:13.043531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.658 [2024-07-24 23:18:13.043535] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129c2c0): datao=0, datal=3072, cccid=4 00:17:50.658 [2024-07-24 23:18:13.043544] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ddf40) on tqpair(0x129c2c0): expected_datao=0, payload_size=3072 00:17:50.658 [2024-07-24 23:18:13.043549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043560] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.658 [2024-07-24 23:18:13.043575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.658 [2024-07-24 23:18:13.043579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddf40) on tqpair=0x129c2c0 00:17:50.658 [2024-07-24 23:18:13.043593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129c2c0) 00:17:50.658 [2024-07-24 23:18:13.043605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.658 [2024-07-24 23:18:13.043628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12ddf40, cid 4, qid 0 00:17:50.658 [2024-07-24 23:18:13.043691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.658 [2024-07-24 23:18:13.043698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.658 [2024-07-24 23:18:13.043702] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.658 ===================================================== 00:17:50.658 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:50.658 ===================================================== 00:17:50.658 Controller Capabilities/Features 00:17:50.658 ================================ 00:17:50.658 Vendor ID: 0000 00:17:50.658 Subsystem Vendor ID: 0000 00:17:50.658 Serial Number: .................... 00:17:50.658 Model Number: ........................................ 
00:17:50.658 Firmware Version: 24.09 00:17:50.658 Recommended Arb Burst: 0 00:17:50.658 IEEE OUI Identifier: 00 00 00 00:17:50.658 Multi-path I/O 00:17:50.658 May have multiple subsystem ports: No 00:17:50.658 May have multiple controllers: No 00:17:50.658 Associated with SR-IOV VF: No 00:17:50.658 Max Data Transfer Size: 131072 00:17:50.658 Max Number of Namespaces: 0 00:17:50.658 Max Number of I/O Queues: 1024 00:17:50.658 NVMe Specification Version (VS): 1.3 00:17:50.658 NVMe Specification Version (Identify): 1.3 00:17:50.658 Maximum Queue Entries: 128 00:17:50.658 Contiguous Queues Required: Yes 00:17:50.658 Arbitration Mechanisms Supported 00:17:50.658 Weighted Round Robin: Not Supported 00:17:50.658 Vendor Specific: Not Supported 00:17:50.658 Reset Timeout: 15000 ms 00:17:50.658 Doorbell Stride: 4 bytes 00:17:50.658 NVM Subsystem Reset: Not Supported 00:17:50.658 Command Sets Supported 00:17:50.658 NVM Command Set: Supported 00:17:50.658 Boot Partition: Not Supported 00:17:50.658 Memory Page Size Minimum: 4096 bytes 00:17:50.658 Memory Page Size Maximum: 4096 bytes 00:17:50.658 Persistent Memory Region: Not Supported 00:17:50.658 Optional Asynchronous Events Supported 00:17:50.658 Namespace Attribute Notices: Not Supported 00:17:50.658 Firmware Activation Notices: Not Supported 00:17:50.658 ANA Change Notices: Not Supported 00:17:50.658 PLE Aggregate Log Change Notices: Not Supported 00:17:50.658 LBA Status Info Alert Notices: Not Supported 00:17:50.658 EGE Aggregate Log Change Notices: Not Supported 00:17:50.658 Normal NVM Subsystem Shutdown event: Not Supported 00:17:50.658 Zone Descriptor Change Notices: Not Supported 00:17:50.658 Discovery Log Change Notices: Supported 00:17:50.658 Controller Attributes 00:17:50.658 128-bit Host Identifier: Not Supported 00:17:50.658 Non-Operational Permissive Mode: Not Supported 00:17:50.658 NVM Sets: Not Supported 00:17:50.658 Read Recovery Levels: Not Supported 00:17:50.658 Endurance Groups: Not Supported 00:17:50.658 Predictable Latency Mode: Not Supported 00:17:50.658 Traffic Based Keep ALive: Not Supported 00:17:50.658 Namespace Granularity: Not Supported 00:17:50.658 SQ Associations: Not Supported 00:17:50.658 UUID List: Not Supported 00:17:50.658 Multi-Domain Subsystem: Not Supported 00:17:50.658 Fixed Capacity Management: Not Supported 00:17:50.658 Variable Capacity Management: Not Supported 00:17:50.658 Delete Endurance Group: Not Supported 00:17:50.658 Delete NVM Set: Not Supported 00:17:50.658 Extended LBA Formats Supported: Not Supported 00:17:50.658 Flexible Data Placement Supported: Not Supported 00:17:50.658 00:17:50.658 Controller Memory Buffer Support 00:17:50.658 ================================ 00:17:50.658 Supported: No 00:17:50.658 00:17:50.658 Persistent Memory Region Support 00:17:50.658 ================================ 00:17:50.658 Supported: No 00:17:50.658 00:17:50.658 Admin Command Set Attributes 00:17:50.658 ============================ 00:17:50.658 Security Send/Receive: Not Supported 00:17:50.658 Format NVM: Not Supported 00:17:50.658 Firmware Activate/Download: Not Supported 00:17:50.658 Namespace Management: Not Supported 00:17:50.658 Device Self-Test: Not Supported 00:17:50.658 Directives: Not Supported 00:17:50.658 NVMe-MI: Not Supported 00:17:50.658 Virtualization Management: Not Supported 00:17:50.658 Doorbell Buffer Config: Not Supported 00:17:50.658 Get LBA Status Capability: Not Supported 00:17:50.658 Command & Feature Lockdown Capability: Not Supported 00:17:50.658 Abort Command Limit: 1 00:17:50.658 Async 
Event Request Limit: 4 00:17:50.658 Number of Firmware Slots: N/A 00:17:50.658 Firmware Slot 1 Read-Only: N/A 00:17:50.658 [2024-07-24 23:18:13.043706] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129c2c0): datao=0, datal=8, cccid=4 00:17:50.658 [2024-07-24 23:18:13.043711] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12ddf40) on tqpair(0x129c2c0): expected_datao=0, payload_size=8 00:17:50.658 [2024-07-24 23:18:13.043716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043723] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043727] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.658 [2024-07-24 23:18:13.043749] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.658 [2024-07-24 23:18:13.043753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.658 [2024-07-24 23:18:13.043758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddf40) on tqpair=0x129c2c0 00:17:50.658 Firmware Activation Without Reset: N/A 00:17:50.658 Multiple Update Detection Support: N/A 00:17:50.658 Firmware Update Granularity: No Information Provided 00:17:50.658 Per-Namespace SMART Log: No 00:17:50.658 Asymmetric Namespace Access Log Page: Not Supported 00:17:50.658 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:50.658 Command Effects Log Page: Not Supported 00:17:50.658 Get Log Page Extended Data: Supported 00:17:50.658 Telemetry Log Pages: Not Supported 00:17:50.658 Persistent Event Log Pages: Not Supported 00:17:50.658 Supported Log Pages Log Page: May Support 00:17:50.658 Commands Supported & Effects Log Page: Not Supported 00:17:50.658 Feature Identifiers & Effects Log Page:May Support 00:17:50.658 NVMe-MI Commands & Effects Log Page: May Support 00:17:50.658 Data Area 4 for Telemetry Log: Not Supported 00:17:50.658 Error Log Page Entries Supported: 128 00:17:50.658 Keep Alive: Not Supported 00:17:50.658 00:17:50.658 NVM Command Set Attributes 00:17:50.658 ========================== 00:17:50.658 Submission Queue Entry Size 00:17:50.658 Max: 1 00:17:50.658 Min: 1 00:17:50.658 Completion Queue Entry Size 00:17:50.658 Max: 1 00:17:50.658 Min: 1 00:17:50.658 Number of Namespaces: 0 00:17:50.658 Compare Command: Not Supported 00:17:50.659 Write Uncorrectable Command: Not Supported 00:17:50.659 Dataset Management Command: Not Supported 00:17:50.659 Write Zeroes Command: Not Supported 00:17:50.659 Set Features Save Field: Not Supported 00:17:50.659 Reservations: Not Supported 00:17:50.659 Timestamp: Not Supported 00:17:50.659 Copy: Not Supported 00:17:50.659 Volatile Write Cache: Not Present 00:17:50.659 Atomic Write Unit (Normal): 1 00:17:50.659 Atomic Write Unit (PFail): 1 00:17:50.659 Atomic Compare & Write Unit: 1 00:17:50.659 Fused Compare & Write: Supported 00:17:50.659 Scatter-Gather List 00:17:50.659 SGL Command Set: Supported 00:17:50.659 SGL Keyed: Supported 00:17:50.659 SGL Bit Bucket Descriptor: Not Supported 00:17:50.659 SGL Metadata Pointer: Not Supported 00:17:50.659 Oversized SGL: Not Supported 00:17:50.659 SGL Metadata Address: Not Supported 00:17:50.659 SGL Offset: Supported 00:17:50.659 Transport SGL Data Block: Not Supported 00:17:50.659 Replay Protected Memory Block: Not Supported 00:17:50.659 00:17:50.659 Firmware Slot Information 00:17:50.659 
========================= 00:17:50.659 Active slot: 0 00:17:50.659 00:17:50.659 00:17:50.659 Error Log 00:17:50.659 ========= 00:17:50.659 00:17:50.659 Active Namespaces 00:17:50.659 ================= 00:17:50.659 Discovery Log Page 00:17:50.659 ================== 00:17:50.659 Generation Counter: 2 00:17:50.659 Number of Records: 2 00:17:50.659 Record Format: 0 00:17:50.659 00:17:50.659 Discovery Log Entry 0 00:17:50.659 ---------------------- 00:17:50.659 Transport Type: 3 (TCP) 00:17:50.659 Address Family: 1 (IPv4) 00:17:50.659 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:50.659 Entry Flags: 00:17:50.659 Duplicate Returned Information: 1 00:17:50.659 Explicit Persistent Connection Support for Discovery: 1 00:17:50.659 Transport Requirements: 00:17:50.659 Secure Channel: Not Required 00:17:50.659 Port ID: 0 (0x0000) 00:17:50.659 Controller ID: 65535 (0xffff) 00:17:50.659 Admin Max SQ Size: 128 00:17:50.659 Transport Service Identifier: 4420 00:17:50.659 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:50.659 Transport Address: 10.0.0.2 00:17:50.659 Discovery Log Entry 1 00:17:50.659 ---------------------- 00:17:50.659 Transport Type: 3 (TCP) 00:17:50.659 Address Family: 1 (IPv4) 00:17:50.659 Subsystem Type: 2 (NVM Subsystem) 00:17:50.659 Entry Flags: 00:17:50.659 Duplicate Returned Information: 0 00:17:50.659 Explicit Persistent Connection Support for Discovery: 0 00:17:50.659 Transport Requirements: 00:17:50.659 Secure Channel: Not Required 00:17:50.659 Port ID: 0 (0x0000) 00:17:50.659 Controller ID: 65535 (0xffff) 00:17:50.659 Admin Max SQ Size: 128 00:17:50.659 Transport Service Identifier: 4420 00:17:50.659 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:50.659 Transport Address: 10.0.0.2 [2024-07-24 23:18:13.043875] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:50.659 [2024-07-24 23:18:13.043889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dd940) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.043897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.659 [2024-07-24 23:18:13.043903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddac0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.043908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.659 [2024-07-24 23:18:13.043913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12ddc40) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.043918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.659 [2024-07-24 23:18:13.043924] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.043928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.659 [2024-07-24 23:18:13.043938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.043943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.043946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.043955] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.043978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.044065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.044081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.044108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.044214] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:50.659 [2024-07-24 23:18:13.044219] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:50.659 [2024-07-24 23:18:13.044230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.044246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.044266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044322] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.044341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044350] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.044357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.044374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044433] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.044447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044452] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.044463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.044480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044543] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.659 [2024-07-24 23:18:13.044553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.659 [2024-07-24 23:18:13.044562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.659 [2024-07-24 23:18:13.044569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.659 [2024-07-24 23:18:13.044585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.659 [2024-07-24 23:18:13.044635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.659 [2024-07-24 23:18:13.044642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.659 [2024-07-24 23:18:13.044646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.044660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.044676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.044693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 23:18:13.044741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.044748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.044752] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.044767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.044783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.044799] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 23:18:13.044850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.044857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.044861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.044875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.044891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.044908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 23:18:13.044953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.044960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.044964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.044979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044983] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.044987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.044994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.045011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 
23:18:13.045056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.045063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.045067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.045071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.045082] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.045086] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.045090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.045097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.045114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 23:18:13.049147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.049168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.049173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.049178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.049193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.049198] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.049202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129c2c0) 00:17:50.660 [2024-07-24 23:18:13.049211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.660 [2024-07-24 23:18:13.049236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12dddc0, cid 3, qid 0 00:17:50.660 [2024-07-24 23:18:13.049287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.660 [2024-07-24 23:18:13.049294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.660 [2024-07-24 23:18:13.049298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.660 [2024-07-24 23:18:13.049303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12dddc0) on tqpair=0x129c2c0 00:17:50.660 [2024-07-24 23:18:13.049311] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:50.660 00:17:50.660 23:18:13 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:50.660 [2024-07-24 23:18:13.096112] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:17:50.660 [2024-07-24 23:18:13.096188] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75137 ] 00:17:50.923 [2024-07-24 23:18:13.237713] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:50.923 [2024-07-24 23:18:13.237783] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:50.923 [2024-07-24 23:18:13.237791] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:50.923 [2024-07-24 23:18:13.237806] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:50.923 [2024-07-24 23:18:13.237814] sock.c: 353:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:50.923 [2024-07-24 23:18:13.237967] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:50.923 [2024-07-24 23:18:13.238035] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5662c0 0 00:17:50.923 [2024-07-24 23:18:13.253217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:50.923 [2024-07-24 23:18:13.253239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:50.923 [2024-07-24 23:18:13.253261] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:50.923 [2024-07-24 23:18:13.253265] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:50.923 [2024-07-24 23:18:13.253330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.253337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.253341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.923 [2024-07-24 23:18:13.253355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:50.923 [2024-07-24 23:18:13.253387] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.923 [2024-07-24 23:18:13.261235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.923 [2024-07-24 23:18:13.261273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.923 [2024-07-24 23:18:13.261278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.923 [2024-07-24 23:18:13.261293] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:50.923 [2024-07-24 23:18:13.261302] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:50.923 [2024-07-24 23:18:13.261309] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:50.923 [2024-07-24 23:18:13.261329] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261338] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.923 [2024-07-24 23:18:13.261348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.923 [2024-07-24 23:18:13.261390] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.923 [2024-07-24 23:18:13.261444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.923 [2024-07-24 23:18:13.261451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.923 [2024-07-24 23:18:13.261455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.923 [2024-07-24 23:18:13.261477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:50.923 [2024-07-24 23:18:13.261485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:50.923 [2024-07-24 23:18:13.261494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.923 [2024-07-24 23:18:13.261510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.923 [2024-07-24 23:18:13.261530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.923 [2024-07-24 23:18:13.261576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.923 [2024-07-24 23:18:13.261583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.923 [2024-07-24 23:18:13.261587] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.923 [2024-07-24 23:18:13.261598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:50.923 [2024-07-24 23:18:13.261606] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:50.923 [2024-07-24 23:18:13.261614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.923 [2024-07-24 23:18:13.261623] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.923 [2024-07-24 23:18:13.261631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.923 [2024-07-24 23:18:13.261651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.924 [2024-07-24 23:18:13.261700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.924 [2024-07-24 23:18:13.261707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.924 [2024-07-24 23:18:13.261711] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.924 [2024-07-24 23:18:13.261721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:50.924 [2024-07-24 23:18:13.261732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.924 [2024-07-24 23:18:13.261748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.924 [2024-07-24 23:18:13.261767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.924 [2024-07-24 23:18:13.261813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.924 [2024-07-24 23:18:13.261820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.924 [2024-07-24 23:18:13.261824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.924 [2024-07-24 23:18:13.261833] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:50.924 [2024-07-24 23:18:13.261839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:50.924 [2024-07-24 23:18:13.261847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:50.924 [2024-07-24 23:18:13.261953] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:50.924 [2024-07-24 23:18:13.261958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:50.924 [2024-07-24 23:18:13.261968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.261977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.924 [2024-07-24 23:18:13.261984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.924 [2024-07-24 23:18:13.262004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.924 [2024-07-24 23:18:13.262054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.924 [2024-07-24 23:18:13.262061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.924 [2024-07-24 23:18:13.262064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.924 [2024-07-24 23:18:13.262074] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:50.924 [2024-07-24 23:18:13.262085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.924 [2024-07-24 23:18:13.262101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.924 [2024-07-24 23:18:13.262120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.924 [2024-07-24 23:18:13.262170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.924 [2024-07-24 23:18:13.262179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.924 [2024-07-24 23:18:13.262183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.924 [2024-07-24 23:18:13.262193] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:50.924 [2024-07-24 23:18:13.262198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:50.924 [2024-07-24 23:18:13.262207] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:50.924 [2024-07-24 23:18:13.262218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:50.924 [2024-07-24 23:18:13.262241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.924 [2024-07-24 23:18:13.262253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.924 [2024-07-24 23:18:13.262274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.924 [2024-07-24 23:18:13.262367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.924 [2024-07-24 23:18:13.262374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.924 [2024-07-24 23:18:13.262378] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262382] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=4096, cccid=0 00:17:50.924 [2024-07-24 23:18:13.262388] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a7940) on tqpair(0x5662c0): expected_datao=0, payload_size=4096 00:17:50.924 [2024-07-24 23:18:13.262393] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262406] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 
23:18:13.262414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.924 [2024-07-24 23:18:13.262420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.924 [2024-07-24 23:18:13.262424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.924 [2024-07-24 23:18:13.262437] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:50.924 [2024-07-24 23:18:13.262442] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:50.924 [2024-07-24 23:18:13.262447] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:50.924 [2024-07-24 23:18:13.262452] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:50.924 [2024-07-24 23:18:13.262456] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:50.924 [2024-07-24 23:18:13.262479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:50.924 [2024-07-24 23:18:13.262488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:50.924 [2024-07-24 23:18:13.262496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.924 [2024-07-24 23:18:13.262505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.924 [2024-07-24 23:18:13.262513] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.924 [2024-07-24 23:18:13.262534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.925 [2024-07-24 23:18:13.262584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.925 [2024-07-24 23:18:13.262591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.925 [2024-07-24 23:18:13.262595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.925 [2024-07-24 23:18:13.262607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.262623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.925 [2024-07-24 23:18:13.262629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262633] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5662c0) 00:17:50.925 
[2024-07-24 23:18:13.262644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.925 [2024-07-24 23:18:13.262650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262654] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262658] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.262664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.925 [2024-07-24 23:18:13.262672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.262686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.925 [2024-07-24 23:18:13.262691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.262706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.262714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.262726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.925 [2024-07-24 23:18:13.262747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7940, cid 0, qid 0 00:17:50.925 [2024-07-24 23:18:13.262755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7ac0, cid 1, qid 0 00:17:50.925 [2024-07-24 23:18:13.262760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7c40, cid 2, qid 0 00:17:50.925 [2024-07-24 23:18:13.262765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.925 [2024-07-24 23:18:13.262770] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.925 [2024-07-24 23:18:13.262857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.925 [2024-07-24 23:18:13.262864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.925 [2024-07-24 23:18:13.262868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.925 [2024-07-24 23:18:13.262878] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:50.925 [2024-07-24 23:18:13.262888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.262897] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.262905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.262912] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.262921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.262928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:50.925 [2024-07-24 23:18:13.262948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.925 [2024-07-24 23:18:13.262997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.925 [2024-07-24 23:18:13.263004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.925 [2024-07-24 23:18:13.263008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.925 [2024-07-24 23:18:13.263074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.263086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.263094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.263106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.925 [2024-07-24 23:18:13.263126] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.925 [2024-07-24 23:18:13.263203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.925 [2024-07-24 23:18:13.263212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.925 [2024-07-24 23:18:13.263216] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263220] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=4096, cccid=4 00:17:50.925 [2024-07-24 23:18:13.263225] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a7f40) on tqpair(0x5662c0): expected_datao=0, payload_size=4096 00:17:50.925 [2024-07-24 23:18:13.263230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263237] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263242] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.925 [2024-07-24 23:18:13.263257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:17:50.925 [2024-07-24 23:18:13.263260] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.925 [2024-07-24 23:18:13.263282] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:50.925 [2024-07-24 23:18:13.263295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.263306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:50.925 [2024-07-24 23:18:13.263315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.925 [2024-07-24 23:18:13.263319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.925 [2024-07-24 23:18:13.263327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.926 [2024-07-24 23:18:13.263349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.926 [2024-07-24 23:18:13.263431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.926 [2024-07-24 23:18:13.263440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.926 [2024-07-24 23:18:13.263444] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=4096, cccid=4 00:17:50.926 [2024-07-24 23:18:13.263453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a7f40) on tqpair(0x5662c0): expected_datao=0, payload_size=4096 00:17:50.926 [2024-07-24 23:18:13.263458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263465] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263469] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.263484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.263488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.926 [2024-07-24 23:18:13.263510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.926 [2024-07-24 23:18:13.263544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.926 [2024-07-24 23:18:13.263565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.926 [2024-07-24 23:18:13.263621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.926 [2024-07-24 23:18:13.263629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.926 [2024-07-24 23:18:13.263633] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263636] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=4096, cccid=4 00:17:50.926 [2024-07-24 23:18:13.263641] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a7f40) on tqpair(0x5662c0): expected_datao=0, payload_size=4096 00:17:50.926 [2024-07-24 23:18:13.263646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263653] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263658] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.263672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.263676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.926 [2024-07-24 23:18:13.263690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263724] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263736] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:50.926 [2024-07-24 23:18:13.263741] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:50.926 [2024-07-24 23:18:13.263747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:50.926 [2024-07-24 23:18:13.263764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.926 [2024-07-24 23:18:13.263777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.926 [2024-07-24 23:18:13.263785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263792] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5662c0) 00:17:50.926 [2024-07-24 23:18:13.263799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:50.926 [2024-07-24 23:18:13.263825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.926 [2024-07-24 23:18:13.263833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a80c0, cid 5, qid 0 00:17:50.926 [2024-07-24 23:18:13.263893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.263900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.263904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0 00:17:50.926 [2024-07-24 23:18:13.263916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.263922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.263926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a80c0) on tqpair=0x5662c0 00:17:50.926 [2024-07-24 23:18:13.263941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.263945] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5662c0) 00:17:50.926 [2024-07-24 23:18:13.263953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.926 [2024-07-24 23:18:13.263971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a80c0, cid 5, qid 0 00:17:50.926 [2024-07-24 23:18:13.264023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.264041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.264045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.264050] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a80c0) on tqpair=0x5662c0 00:17:50.926 [2024-07-24 23:18:13.264062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.926 [2024-07-24 23:18:13.264067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5662c0) 00:17:50.926 [2024-07-24 23:18:13.264074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.926 [2024-07-24 23:18:13.264094] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a80c0, cid 5, qid 0 00:17:50.926 [2024-07-24 23:18:13.264158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.926 [2024-07-24 23:18:13.264166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:50.926 [2024-07-24 23:18:13.264170] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a80c0) on tqpair=0x5662c0 00:17:50.927 [2024-07-24 23:18:13.264186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5662c0) 00:17:50.927 [2024-07-24 23:18:13.264198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.927 [2024-07-24 23:18:13.264218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a80c0, cid 5, qid 0 00:17:50.927 [2024-07-24 23:18:13.264261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.927 [2024-07-24 23:18:13.264268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.927 [2024-07-24 23:18:13.264272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a80c0) on tqpair=0x5662c0 00:17:50.927 [2024-07-24 23:18:13.264297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5662c0) 00:17:50.927 [2024-07-24 23:18:13.264310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.927 [2024-07-24 23:18:13.264318] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5662c0) 00:17:50.927 [2024-07-24 23:18:13.264329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.927 [2024-07-24 23:18:13.264337] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264341] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x5662c0) 00:17:50.927 [2024-07-24 23:18:13.264347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.927 [2024-07-24 23:18:13.264359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5662c0) 00:17:50.927 [2024-07-24 23:18:13.264370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.927 [2024-07-24 23:18:13.264391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a80c0, cid 5, qid 0 00:17:50.927 [2024-07-24 23:18:13.264398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7f40, cid 4, qid 0 00:17:50.927 [2024-07-24 23:18:13.264403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a8240, cid 6, qid 0 00:17:50.927 [2024-07-24 
23:18:13.264408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a83c0, cid 7, qid 0 00:17:50.927 [2024-07-24 23:18:13.264560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.927 [2024-07-24 23:18:13.264567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.927 [2024-07-24 23:18:13.264571] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264575] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=8192, cccid=5 00:17:50.927 [2024-07-24 23:18:13.264580] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a80c0) on tqpair(0x5662c0): expected_datao=0, payload_size=8192 00:17:50.927 [2024-07-24 23:18:13.264585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264601] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264606] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.927 [2024-07-24 23:18:13.264618] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.927 [2024-07-24 23:18:13.264622] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264626] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=512, cccid=4 00:17:50.927 [2024-07-24 23:18:13.264631] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a7f40) on tqpair(0x5662c0): expected_datao=0, payload_size=512 00:17:50.927 [2024-07-24 23:18:13.264635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264642] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264646] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.927 [2024-07-24 23:18:13.264657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.927 [2024-07-24 23:18:13.264661] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264665] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=512, cccid=6 00:17:50.927 [2024-07-24 23:18:13.264669] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a8240) on tqpair(0x5662c0): expected_datao=0, payload_size=512 00:17:50.927 [2024-07-24 23:18:13.264674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264680] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264684] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264690] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:50.927 [2024-07-24 23:18:13.264696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:50.927 [2024-07-24 23:18:13.264699] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:50.927 [2024-07-24 23:18:13.264703] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5662c0): datao=0, datal=4096, cccid=7 00:17:50.927 [2024-07-24 23:18:13.264708] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5a83c0) on tqpair(0x5662c0): expected_datao=0, payload_size=4096
00:17:50.927 [2024-07-24 23:18:13.264712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:50.927 [2024-07-24 23:18:13.264719] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:50.927 [2024-07-24 23:18:13.264723] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:50.927 [2024-07-24 23:18:13.264731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:50.927 [2024-07-24 23:18:13.264738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:50.927 [2024-07-24 23:18:13.264742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:50.927 [2024-07-24 23:18:13.264746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a80c0) on tqpair=0x5662c0
00:17:50.927 [2024-07-24 23:18:13.264766] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:50.927 [2024-07-24 23:18:13.264772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:50.927 =====================================================
00:17:50.927 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:50.927 =====================================================
00:17:50.927 Controller Capabilities/Features
00:17:50.927 ================================
00:17:50.927 Vendor ID: 8086
00:17:50.927 Subsystem Vendor ID: 8086
00:17:50.927 Serial Number: SPDK00000000000001
00:17:50.927 Model Number: SPDK bdev Controller
00:17:50.927 Firmware Version: 24.09
00:17:50.927 Recommended Arb Burst: 6
00:17:50.927 IEEE OUI Identifier: e4 d2 5c
00:17:50.927 Multi-path I/O
00:17:50.927 May have multiple subsystem ports: Yes
00:17:50.927 May have multiple controllers: Yes
00:17:50.927 Associated with SR-IOV VF: No
00:17:50.927 Max Data Transfer Size: 131072
00:17:50.927 Max Number of Namespaces: 32
00:17:50.927 Max Number of I/O Queues: 127
00:17:50.928 NVMe Specification Version (VS): 1.3
00:17:50.928 NVMe Specification Version (Identify): 1.3
00:17:50.928 Maximum Queue Entries: 128
00:17:50.928 Contiguous Queues Required: Yes
00:17:50.928 Arbitration Mechanisms Supported
00:17:50.928 Weighted Round Robin: Not Supported
00:17:50.928 Vendor Specific: Not Supported
00:17:50.928 Reset Timeout: 15000 ms
00:17:50.928 Doorbell Stride: 4 bytes
00:17:50.928 NVM Subsystem Reset: Not Supported
00:17:50.928 Command Sets Supported
00:17:50.928 NVM Command Set: Supported
00:17:50.928 Boot Partition: Not Supported
00:17:50.928 Memory Page Size Minimum: 4096 bytes
00:17:50.928 Memory Page Size Maximum: 4096 bytes
00:17:50.928 Persistent Memory Region: Not Supported
00:17:50.928 Optional Asynchronous Events Supported
00:17:50.928 Namespace Attribute Notices: Supported
00:17:50.928 Firmware Activation Notices: Not Supported
00:17:50.928 ANA Change Notices: Not Supported
00:17:50.928 PLE Aggregate Log Change Notices: Not Supported
00:17:50.928 LBA Status Info Alert Notices: Not Supported
00:17:50.928 EGE Aggregate Log Change Notices: Not Supported
00:17:50.928 Normal NVM Subsystem Shutdown event: Not Supported
00:17:50.928 Zone Descriptor Change Notices: Not Supported
00:17:50.928 Discovery Log Change Notices: Not Supported
00:17:50.928 Controller Attributes
00:17:50.928 128-bit Host Identifier: Supported
00:17:50.928 Non-Operational Permissive Mode: Not Supported
00:17:50.928 NVM Sets: Not Supported
00:17:50.928 Read Recovery Levels: Not Supported
00:17:50.928 Endurance Groups: Not Supported
00:17:50.928 Predictable Latency Mode: Not Supported
00:17:50.928 Traffic Based Keep ALive: Not Supported
00:17:50.928 Namespace Granularity: Not Supported
00:17:50.928 SQ Associations: Not Supported
00:17:50.928 UUID List: Not Supported
00:17:50.928 Multi-Domain Subsystem: Not Supported
00:17:50.928 Fixed Capacity Management: Not Supported
00:17:50.928 Variable Capacity Management: Not Supported
00:17:50.928 Delete Endurance Group: Not Supported
00:17:50.928 Delete NVM Set: Not Supported
00:17:50.928 Extended LBA Formats Supported: Not Supported
00:17:50.928 Flexible Data Placement Supported: Not Supported
00:17:50.928
00:17:50.928 Controller Memory Buffer Support
00:17:50.928 ================================
00:17:50.928 Supported: No
00:17:50.928
00:17:50.928 Persistent Memory Region Support
00:17:50.928 ================================
00:17:50.928 Supported: No
00:17:50.928
00:17:50.928 Admin Command Set Attributes
00:17:50.928 ============================
00:17:50.928 Security Send/Receive: Not Supported
00:17:50.928 Format NVM: Not Supported
00:17:50.928 Firmware Activate/Download: Not Supported
00:17:50.928 Namespace Management: Not Supported
00:17:50.928 Device Self-Test: Not Supported
00:17:50.928 Directives: Not Supported
00:17:50.928 NVMe-MI: Not Supported
00:17:50.928 Virtualization Management: Not Supported
00:17:50.928 Doorbell Buffer Config: Not Supported
00:17:50.928 Get LBA Status Capability: Not Supported
00:17:50.928 Command & Feature Lockdown Capability: Not Supported
00:17:50.928 Abort Command Limit: 4
00:17:50.928 Async Event Request Limit: 4
00:17:50.928 Number of Firmware Slots: N/A
00:17:50.928 Firmware Slot 1 Read-Only: N/A
00:17:50.928 Firmware Activation Without Reset: N/A
00:17:50.928 Multiple Update Detection Support: N/A
00:17:50.928 Firmware Update Granularity: No Information Provided
00:17:50.928 Per-Namespace SMART Log: No
00:17:50.928 Asymmetric Namespace Access Log Page: Not Supported
00:17:50.928 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:17:50.928 Command Effects Log Page: Supported
00:17:50.928 Get Log Page Extended Data: Supported
00:17:50.928 Telemetry Log Pages: Not Supported
00:17:50.928 Persistent Event Log Pages: Not Supported
00:17:50.928 Supported Log Pages Log Page: May Support
00:17:50.928 Commands Supported & Effects Log Page: Not Supported
00:17:50.928 Feature Identifiers & Effects Log Page:May Support
00:17:50.928 NVMe-MI Commands & Effects Log Page: May Support
00:17:50.928 Data Area 4 for Telemetry Log: Not Supported
00:17:50.928 Error Log Page Entries Supported: 128
00:17:50.928 Keep Alive: Supported
00:17:50.928 Keep Alive Granularity: 10000 ms
00:17:50.928
00:17:50.928 NVM Command Set Attributes
00:17:50.928 ==========================
00:17:50.928 Submission Queue Entry Size
00:17:50.928 Max: 64
00:17:50.928 Min: 64
00:17:50.928 Completion Queue Entry Size
00:17:50.928 Max: 16
00:17:50.928 Min: 16
00:17:50.928 Number of Namespaces: 32
00:17:50.928 Compare Command: Supported
00:17:50.928 Write Uncorrectable Command: Not Supported
00:17:50.928 Dataset Management Command: Supported
00:17:50.928 Write Zeroes Command: Supported
00:17:50.928 Set Features Save Field: Not Supported
00:17:50.928 Reservations: Supported
00:17:50.928 Timestamp: Not Supported
00:17:50.928 Copy: Supported
00:17:50.928 Volatile Write Cache: Present
00:17:50.928 Atomic Write Unit (Normal): 1
00:17:50.928 Atomic Write Unit (PFail): 1
00:17:50.928 Atomic Compare & Write Unit: 1
00:17:50.928 Fused Compare & Write: Supported
00:17:50.928 Scatter-Gather List
00:17:50.928 SGL Command Set: Supported
00:17:50.928 SGL Keyed: Supported
00:17:50.928 SGL Bit Bucket Descriptor: Not Supported
00:17:50.928 SGL Metadata Pointer: Not Supported
00:17:50.928 Oversized SGL: Not Supported
00:17:50.928 SGL Metadata Address: Not Supported
00:17:50.928 SGL Offset: Supported
00:17:50.928 Transport SGL Data Block: Not Supported
00:17:50.928 Replay Protected Memory Block: Not Supported
00:17:50.928
00:17:50.928 Firmware Slot Information
00:17:50.928 =========================
00:17:50.928 Active slot: 1
00:17:50.928 Slot 1 Firmware Revision: 24.09
00:17:50.928
00:17:50.928
00:17:50.928 Commands Supported and Effects
00:17:50.928 ==============================
00:17:50.928 Admin Commands
00:17:50.929 --------------
00:17:50.929 Get Log Page (02h): Supported
00:17:50.929 Identify (06h): Supported
00:17:50.929 Abort (08h): Supported
00:17:50.929 Set Features (09h): Supported
00:17:50.929 Get Features (0Ah): Supported
00:17:50.929 Asynchronous Event Request (0Ch): Supported
00:17:50.929 Keep Alive (18h): Supported
00:17:50.929 I/O Commands
00:17:50.929 ------------
00:17:50.929 Flush (00h): Supported LBA-Change
00:17:50.929 Write (01h): Supported LBA-Change
00:17:50.929 Read (02h): Supported
00:17:50.929 Compare (05h): Supported
00:17:50.929 Write Zeroes (08h): Supported LBA-Change
00:17:50.929 Dataset Management (09h): Supported LBA-Change
00:17:50.929 Copy (19h): Supported LBA-Change
00:17:50.929
00:17:50.929 Error Log
00:17:50.929 =========
00:17:50.929
00:17:50.929 Arbitration
00:17:50.929 ===========
00:17:50.929 Arbitration Burst: 1
00:17:50.929
00:17:50.929 Power Management
00:17:50.929 ================
00:17:50.929 Number of Power States: 1
00:17:50.929 Current Power State: Power State #0
00:17:50.929 Power State #0:
00:17:50.929 Max Power: 0.00 W
00:17:50.929 Non-Operational State: Operational
00:17:50.929 Entry Latency: Not Reported
00:17:50.929 Exit Latency: Not Reported
00:17:50.929 Relative Read Throughput: 0
00:17:50.929 Relative Read Latency: 0
00:17:50.929 Relative Write Throughput: 0
00:17:50.929 Relative Write Latency: 0
00:17:50.929 Idle Power: Not Reported
00:17:50.929 Active Power: Not Reported
00:17:50.929 Non-Operational Permissive Mode: Not Supported
00:17:50.929
00:17:50.929 Health Information
00:17:50.929 ==================
00:17:50.929 Critical Warnings:
00:17:50.929 Available Spare Space: OK
00:17:50.929 Temperature: OK
00:17:50.929 Device Reliability: OK
00:17:50.929 Read Only: No
00:17:50.929 Volatile Memory Backup: OK
00:17:50.929 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:50.929 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:50.929 Available Spare: 0%
00:17:50.929 Available Spare Threshold: 0%
00:17:50.929 Life Percentage Used:[2024-07-24 23:18:13.264776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:50.929 [2024-07-24 23:18:13.264780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7f40) on tqpair=0x5662c0
00:17:50.929 [2024-07-24 23:18:13.264794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:50.929 [2024-07-24 23:18:13.264800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:50.929 [2024-07-24 23:18:13.264804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:50.929 [2024-07-24 23:18:13.264808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a8240) on tqpair=0x5662c0
00:17:50.929 [2024-07-24 23:18:13.264816] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
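The identify dump above gives everything needed to reach this subsystem from a standalone SPDK host program: transport TCP, address 10.0.0.2, service 4420, subsystem NQN nqn.2016-06.io.spdk:cnode1. A minimal, hedged sketch that connects and prints a few of the same controller fields follows; the program name and output formatting are illustrative and error handling is abbreviated, but the spdk_* calls are the public API from spdk/env.h and spdk/nvme.h.

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";              /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target parameters taken from the identify dump above. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);       /* default ctrlr opts */
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s Serial: %.20s FW: %.8s CNTLID: 0x%04x\n",
	       (const char *)cdata->mn, (const char *)cdata->sn,
	       (const char *)cdata->fr, cdata->cntlid);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The layout of the dump itself appears to match what SPDK's identify example application prints, so the fields reported by such a sketch should line up with the values logged here.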
00:17:50.929 [2024-07-24 23:18:13.264822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.929 [2024-07-24 23:18:13.264826] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.929 [2024-07-24 23:18:13.264830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a83c0) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.264945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.929 [2024-07-24 23:18:13.264952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x5662c0) 00:17:50.929 [2024-07-24 23:18:13.264960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.929 [2024-07-24 23:18:13.264984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a83c0, cid 7, qid 0 00:17:50.929 [2024-07-24 23:18:13.265033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.929 [2024-07-24 23:18:13.265040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.929 [2024-07-24 23:18:13.265044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.929 [2024-07-24 23:18:13.265049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a83c0) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.265089] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:50.929 [2024-07-24 23:18:13.265102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7940) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.265109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.929 [2024-07-24 23:18:13.265115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7ac0) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.265120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.929 [2024-07-24 23:18:13.265126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7c40) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.929 [2024-07-24 23:18:13.269176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.929 [2024-07-24 23:18:13.269181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:50.929 [2024-07-24 23:18:13.269192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269285] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269293] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269297] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269449] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:50.930 [2024-07-24 23:18:13.269455] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:50.930 [2024-07-24 23:18:13.269465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269668] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269803] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.269888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.269892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.269907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.269916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.269923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.269941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.269993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.270000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.270004] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.270008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.270019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.270024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.270028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.930 [2024-07-24 23:18:13.270035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.930 [2024-07-24 23:18:13.270053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.930 [2024-07-24 23:18:13.270105] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.930 [2024-07-24 23:18:13.270112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.930 [2024-07-24 23:18:13.270116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.930 [2024-07-24 23:18:13.270120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.930 [2024-07-24 23:18:13.270131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270181] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270226] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270237] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270261] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 
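From "Prepare to destruct SSD" onward the trace is teardown: outstanding admin commands complete as ABORTED - SQ DELETION, RTD3E is 0 so the default 10000 ms shutdown timeout applies, and the long run of FABRIC PROPERTY GET qid:0 cid:3 records is consistent with the host polling the controller status property until shutdown completes. In application code this phase is driven by detaching the controller; a hedged sketch using SPDK's asynchronous detach API (the wrapper function is illustrative):

#include <errno.h>
#include "spdk/nvme.h"

/* Illustrative sketch: detach asynchronously and poll until the controller
 * reports that shutdown finished; each poll step advances admin/property
 * traffic of the kind seen in this trace. */
static void
detach_and_wait(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
		return;
	}
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		/* Not finished yet; keep polling. */
	}
}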
[2024-07-24 23:18:13.270356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270453] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270499] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270603] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270656] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270660] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 
23:18:13.270685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270868] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.270899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.270918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.270960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.931 [2024-07-24 23:18:13.270967] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.931 [2024-07-24 23:18:13.270971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.931 [2024-07-24 23:18:13.270986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.931 [2024-07-24 23:18:13.270995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.931 [2024-07-24 23:18:13.271002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.931 [2024-07-24 23:18:13.271021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.931 [2024-07-24 23:18:13.271063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.932 [2024-07-24 23:18:13.271070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.932 [2024-07-24 23:18:13.271074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.932 [2024-07-24 23:18:13.271078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.932 [2024-07-24 23:18:13.271089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.932 [2024-07-24 23:18:13.271094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.932 [2024-07-24 23:18:13.271098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.932 [2024-07-24 23:18:13.271105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.934 [2024-07-24 23:18:13.273117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp
req 0x5a7dc0, cid 3, qid 0 00:17:50.934 [2024-07-24 23:18:13.277149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.934 [2024-07-24 23:18:13.277169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.934 [2024-07-24 23:18:13.277174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.934 [2024-07-24 23:18:13.277179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.934 [2024-07-24 23:18:13.277192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:50.934 [2024-07-24 23:18:13.277197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:50.934 [2024-07-24 23:18:13.277201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5662c0) 00:17:50.934 [2024-07-24 23:18:13.277210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:50.934 [2024-07-24 23:18:13.277235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5a7dc0, cid 3, qid 0 00:17:50.934 [2024-07-24 23:18:13.277284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:50.934 [2024-07-24 23:18:13.277292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:50.934 [2024-07-24 23:18:13.277295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:50.934 [2024-07-24 23:18:13.277300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5a7dc0) on tqpair=0x5662c0 00:17:50.934 [2024-07-24 23:18:13.277309] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:50.934 0% 00:17:50.934 Data Units Read: 0 00:17:50.934 Data Units Written: 0 00:17:50.934 Host Read Commands: 0 00:17:50.934 Host Write Commands: 0 00:17:50.934 Controller Busy Time: 0 minutes 00:17:50.934 Power Cycles: 0 00:17:50.934 Power On Hours: 0 hours 00:17:50.934 Unsafe Shutdowns: 0 00:17:50.934 Unrecoverable Media Errors: 0 00:17:50.934 Lifetime Error Log Entries: 0 00:17:50.934 Warning Temperature Time: 0 minutes 00:17:50.934 Critical Temperature Time: 0 minutes 00:17:50.934 00:17:50.934 Number of Queues 00:17:50.934 ================ 00:17:50.934 Number of I/O Submission Queues: 127 00:17:50.934 Number of I/O Completion Queues: 127 00:17:50.934 00:17:50.934 Active Namespaces 00:17:50.934 ================= 00:17:50.934 Namespace ID:1 00:17:50.934 Error Recovery Timeout: Unlimited 00:17:50.934 Command Set Identifier: NVM (00h) 00:17:50.934 Deallocate: Supported 00:17:50.934 Deallocated/Unwritten Error: Not Supported 00:17:50.934 Deallocated Read Value: Unknown 00:17:50.934 Deallocate in Write Zeroes: Not Supported 00:17:50.934 Deallocated Guard Field: 0xFFFF 00:17:50.934 Flush: Supported 00:17:50.934 Reservation: Supported 00:17:50.934 Namespace Sharing Capabilities: Multiple Controllers 00:17:50.934 Size (in LBAs): 131072 (0GiB) 00:17:50.934 Capacity (in LBAs): 131072 (0GiB) 00:17:50.934 Utilization (in LBAs): 131072 (0GiB) 00:17:50.934 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:50.934 EUI64: ABCDEF0123456789 00:17:50.934 UUID: dbd904a1-7734-42f7-bc71-88ff0d30c4ef 00:17:50.934 Thin Provisioning: Not Supported 00:17:50.934 Per-NS Atomic Units: Yes 00:17:50.934 Atomic Boundary Size (Normal): 0 00:17:50.934 Atomic Boundary Size (PFail): 0 00:17:50.934 Atomic Boundary Offset: 0 00:17:50.934 Maximum Single Source Range Length: 65535 00:17:50.934 
Maximum Copy Length: 65535 00:17:50.934 Maximum Source Range Count: 1 00:17:50.934 NGUID/EUI64 Never Reused: No 00:17:50.934 Namespace Write Protected: No 00:17:50.934 Number of LBA Formats: 1 00:17:50.934 Current LBA Format: LBA Format #00 00:17:50.934 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:50.934 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.934 rmmod nvme_tcp 00:17:50.934 rmmod nvme_fabrics 00:17:50.934 rmmod nvme_keyring 00:17:50.934 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 75099 ']' 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 75099 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 75099 ']' 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 75099 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75099 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:51.193 killing process with pid 75099 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75099' 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 75099 00:17:51.193 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 75099 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:51.452 00:17:51.452 real 0m2.713s 00:17:51.452 user 0m7.374s 00:17:51.452 sys 0m0.742s 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.452 23:18:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.452 ************************************ 00:17:51.452 END TEST nvmf_identify 00:17:51.452 ************************************ 00:17:51.452 23:18:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.452 23:18:13 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:51.452 23:18:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.452 23:18:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.452 23:18:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.452 ************************************ 00:17:51.452 START TEST nvmf_perf 00:17:51.452 ************************************ 00:17:51.452 23:18:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:51.710 * Looking for test storage... 00:17:51.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.710 23:18:13 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.710 23:18:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:51.710 Cannot find device "nvmf_tgt_br" 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.710 Cannot find device "nvmf_tgt_br2" 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:17:51.710 
23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:51.710 Cannot find device "nvmf_tgt_br" 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:51.710 Cannot find device "nvmf_tgt_br2" 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.710 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:51.968 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 
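The nvmf_veth_init trace above reduces to a small, reproducible topology. A condensed sketch (run as root), using the same interface names, namespace name, and 10.0.0.0/24 addresses that the script uses; the iptables ACCEPT rules and ping checks that follow below complete the setup:
# Sketch of the topology nvmf_veth_init builds: one initiator veth on the host,
# two target veths moved into a private namespace, all host-side ends on one bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br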
00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:51.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:17:51.969 00:17:51.969 --- 10.0.0.2 ping statistics --- 00:17:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.969 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:51.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:51.969 00:17:51.969 --- 10.0.0.3 ping statistics --- 00:17:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.969 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:51.969 00:17:51.969 --- 10.0.0.1 ping statistics --- 00:17:51.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.969 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75301 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75301 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75301 ']' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.969 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.969 23:18:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:51.969 [2024-07-24 23:18:14.435682] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:17:51.969 [2024-07-24 23:18:14.435801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.227 [2024-07-24 23:18:14.580439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:52.485 [2024-07-24 23:18:14.723676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.485 [2024-07-24 23:18:14.723746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.485 [2024-07-24 23:18:14.723758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.485 [2024-07-24 23:18:14.723767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.485 [2024-07-24 23:18:14.723774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.486 [2024-07-24 23:18:14.724175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.486 [2024-07-24 23:18:14.724249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.486 [2024-07-24 23:18:14.724389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:52.486 [2024-07-24 23:18:14.724391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.486 [2024-07-24 23:18:14.797618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:53.052 23:18:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:53.618 23:18:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:53.618 23:18:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:53.618 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:53.618 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:54.185 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:54.185 23:18:16 
nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:54.185 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:54.185 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:54.185 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.185 [2024-07-24 23:18:16.581385] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.185 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.443 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.443 23:18:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.701 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.701 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:54.959 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.248 [2024-07-24 23:18:17.475101] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.248 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:55.248 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:55.248 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:55.248 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:55.248 23:18:17 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:56.621 Initializing NVMe Controllers 00:17:56.621 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:56.621 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:56.621 Initialization complete. Launching workers. 00:17:56.621 ======================================================== 00:17:56.621 Latency(us) 00:17:56.621 Device Information : IOPS MiB/s Average min max 00:17:56.621 PCIE (0000:00:10.0) NSID 1 from core 0: 22528.00 88.00 1419.84 398.19 8116.98 00:17:56.621 ======================================================== 00:17:56.621 Total : 22528.00 88.00 1419.84 398.19 8116.98 00:17:56.621 00:17:56.621 23:18:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:57.992 Initializing NVMe Controllers 00:17:57.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:57.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:57.992 Initialization complete. Launching workers. 
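Stripped of the xtrace noise, the target bring-up above comes down to starting nvmf_tgt inside the namespace and issuing a short RPC sequence. A minimal sketch with the binary and rpc.py paths abbreviated; the subsystem, bdev, and listener parameters are taken verbatim from the trace:
# Target bring-up as traced by host/perf.sh (paths abbreviated).
ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the harness waits for the RPC socket /var/tmp/spdk.sock before issuing RPCs)
rpc.py nvmf_create_transport -t tcp -o                                    # TCP transport
rpc.py bdev_malloc_create 64 512                                          # 64 MB, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0           # namespace 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1           # namespace 2: local NVMe at 0000:00:10.0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service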
00:17:57.992 ======================================================== 00:17:57.992 Latency(us) 00:17:57.992 Device Information : IOPS MiB/s Average min max 00:17:57.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3612.98 14.11 276.47 104.66 5110.63 00:17:57.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.74 4062.87 12283.87 00:17:57.992 ======================================================== 00:17:57.992 Total : 3736.98 14.60 536.49 104.66 12283.87 00:17:57.992 00:17:57.992 23:18:20 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:59.366 Initializing NVMe Controllers 00:17:59.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:59.366 Initialization complete. Launching workers. 00:17:59.366 ======================================================== 00:17:59.366 Latency(us) 00:17:59.366 Device Information : IOPS MiB/s Average min max 00:17:59.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8666.32 33.85 3695.67 571.94 8601.32 00:17:59.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3999.22 15.62 8052.87 6779.77 12561.98 00:17:59.366 ======================================================== 00:17:59.366 Total : 12665.54 49.47 5071.48 571.94 12561.98 00:17:59.366 00:17:59.366 23:18:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:59.366 23:18:21 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:01.933 Initializing NVMe Controllers 00:18:01.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.933 Controller IO queue size 128, less than required. 00:18:01.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.933 Controller IO queue size 128, less than required. 00:18:01.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:01.933 Initialization complete. Launching workers. 
00:18:01.933 ======================================================== 00:18:01.933 Latency(us) 00:18:01.933 Device Information : IOPS MiB/s Average min max 00:18:01.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1394.26 348.57 93577.35 48027.84 172372.72 00:18:01.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.90 154.22 211149.03 81487.27 335340.21 00:18:01.933 ======================================================== 00:18:01.933 Total : 2011.16 502.79 129640.85 48027.84 335340.21 00:18:01.933 00:18:01.933 23:18:24 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:01.933 Initializing NVMe Controllers 00:18:01.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.933 Controller IO queue size 128, less than required. 00:18:01.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.933 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:01.933 Controller IO queue size 128, less than required. 00:18:01.933 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:01.933 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:01.933 WARNING: Some requested NVMe devices were skipped 00:18:01.933 No valid NVMe controllers or AIO or URING devices found 00:18:01.933 23:18:24 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:18:04.465 Initializing NVMe Controllers 00:18:04.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.465 Controller IO queue size 128, less than required. 00:18:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.465 Controller IO queue size 128, less than required. 00:18:04.465 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.465 Initialization complete. Launching workers. 
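Each perf pass above drives the same tool against the listener, varying only queue depth, I/O size, and runtime. Representative invocations, with the binary path abbreviated and the transport ID string exactly as it appears in the trace:
# Perf invocations as traced (binary path abbreviated); flags copied from the runs above.
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'   # local PCIe baseline
spdk_nvme_perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TRID"
spdk_nvme_perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TRID"
spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$TRID"
spdk_nvme_perf -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -r "$TRID" -c 0xf -P 4
spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$TRID" --transport-stat              # also prints TCP poll statistics
The 36964-byte pass is rejected for both namespaces because that I/O size is not a multiple of either sector size, which is why it reports no valid controllers above.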
00:18:04.465 00:18:04.465 ==================== 00:18:04.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:04.465 TCP transport: 00:18:04.465 polls: 9649 00:18:04.465 idle_polls: 5736 00:18:04.465 sock_completions: 3913 00:18:04.465 nvme_completions: 5771 00:18:04.465 submitted_requests: 8544 00:18:04.465 queued_requests: 1 00:18:04.465 00:18:04.465 ==================== 00:18:04.465 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:04.465 TCP transport: 00:18:04.465 polls: 12025 00:18:04.465 idle_polls: 7891 00:18:04.465 sock_completions: 4134 00:18:04.465 nvme_completions: 6113 00:18:04.465 submitted_requests: 9110 00:18:04.465 queued_requests: 1 00:18:04.465 ======================================================== 00:18:04.465 Latency(us) 00:18:04.465 Device Information : IOPS MiB/s Average min max 00:18:04.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1442.32 360.58 90580.80 49334.39 154092.95 00:18:04.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1527.81 381.95 83966.37 47696.48 123549.26 00:18:04.465 ======================================================== 00:18:04.465 Total : 2970.13 742.53 87178.39 47696.48 154092.95 00:18:04.465 00:18:04.465 23:18:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:04.465 23:18:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.723 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.723 rmmod nvme_tcp 00:18:04.723 rmmod nvme_fabrics 00:18:04.723 rmmod nvme_keyring 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75301 ']' 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75301 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75301 ']' 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75301 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75301 00:18:04.982 killing process with pid 75301 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.982 23:18:27 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75301' 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75301 00:18:04.982 23:18:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75301 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:05.921 00:18:05.921 real 0m14.266s 00:18:05.921 user 0m52.061s 00:18:05.921 sys 0m4.139s 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.921 ************************************ 00:18:05.921 END TEST nvmf_perf 00:18:05.921 ************************************ 00:18:05.921 23:18:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:05.921 23:18:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.921 23:18:28 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:05.921 23:18:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.921 23:18:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.921 23:18:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.921 ************************************ 00:18:05.921 START TEST nvmf_fio_host 00:18:05.921 ************************************ 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:05.921 * Looking for test storage... 
00:18:05.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.921 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
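One detail worth noting in the common.sh trace above: every run generates a fresh host identity with nvme gen-hostnqn and carries it around as a pair of connect arguments. A minimal sketch of that relationship, assuming nvme-cli is installed; the parameter expansion is an illustrative equivalent, not the script's own code:

# Sketch only -- mirrors the values visible in the trace, not copied from common.sh.
NVME_HOSTNQN=$(nvme gen-hostnqn)              # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # keep only the UUID portion (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Hypothetical kernel-initiator use of those arguments; this particular test drives I/O
# through the SPDK fio plugin instead of nvme connect:
# nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"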
00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:05.922 Cannot find device "nvmf_tgt_br" 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:05.922 Cannot find device "nvmf_tgt_br2" 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:05.922 Cannot find device "nvmf_tgt_br" 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:05.922 Cannot find device "nvmf_tgt_br2" 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:18:05.922 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:06.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:06.180 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:06.180 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:06.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:18:06.181 00:18:06.181 --- 10.0.0.2 ping statistics --- 00:18:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.181 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:06.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:06.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:18:06.181 00:18:06.181 --- 10.0.0.3 ping statistics --- 00:18:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.181 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:06.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:06.181 00:18:06.181 --- 10.0.0.1 ping statistics --- 00:18:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.181 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75706 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75706 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75706 ']' 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.181 23:18:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.439 [2024-07-24 23:18:28.716746] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:18:06.439 [2024-07-24 23:18:28.716855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.439 [2024-07-24 23:18:28.859728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.698 [2024-07-24 23:18:28.995288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:06.698 [2024-07-24 23:18:28.995359] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.698 [2024-07-24 23:18:28.995371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.698 [2024-07-24 23:18:28.995380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.698 [2024-07-24 23:18:28.995388] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.698 [2024-07-24 23:18:28.996244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.698 [2024-07-24 23:18:28.996320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.698 [2024-07-24 23:18:28.996457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.698 [2024-07-24 23:18:28.996460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.698 [2024-07-24 23:18:29.072409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.264 23:18:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.264 23:18:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:18:07.264 23:18:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:07.522 [2024-07-24 23:18:29.912643] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.522 23:18:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:07.522 23:18:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.522 23:18:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.522 23:18:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.780 Malloc1 00:18:07.780 23:18:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:08.346 23:18:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:08.604 23:18:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.604 [2024-07-24 23:18:31.068636] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.604 23:18:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:08.863 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:09.121 23:18:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:09.121 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:09.121 fio-3.35 00:18:09.121 Starting 1 thread 00:18:11.669 00:18:11.669 test: (groupid=0, jobs=1): err= 0: pid=75789: Wed Jul 24 23:18:33 2024 00:18:11.669 read: IOPS=9010, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:18:11.669 slat (nsec): min=1996, max=350638, avg=2586.56, stdev=3498.67 00:18:11.669 clat (usec): min=2756, max=13908, avg=7384.82, stdev=537.48 00:18:11.669 lat (usec): min=2792, max=13910, avg=7387.41, stdev=537.30 00:18:11.669 clat percentiles (usec): 00:18:11.669 | 1.00th=[ 6194], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 6980], 00:18:11.669 | 30.00th=[ 7177], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:18:11.669 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:18:11.669 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[11731], 99.95th=[12911], 00:18:11.669 | 99.99th=[13829] 00:18:11.669 bw ( KiB/s): min=35544, max=36592, per=99.94%, avg=36022.00, stdev=497.48, samples=4 00:18:11.669 iops : min= 8886, max= 9148, avg=9005.50, stdev=124.37, samples=4 00:18:11.669 write: IOPS=9027, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:18:11.669 
slat (usec): min=2, max=312, avg= 2.67, stdev= 2.66 00:18:11.669 clat (usec): min=2585, max=13146, avg=6756.94, stdev=487.44 00:18:11.669 lat (usec): min=2599, max=13149, avg=6759.61, stdev=487.39 00:18:11.669 clat percentiles (usec): 00:18:11.669 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6390], 00:18:11.669 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:18:11.669 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:18:11.669 | 99.00th=[ 7832], 99.50th=[ 8291], 99.90th=[10683], 99.95th=[11994], 00:18:11.669 | 99.99th=[13173] 00:18:11.669 bw ( KiB/s): min=35960, max=36288, per=100.00%, avg=36130.00, stdev=138.00, samples=4 00:18:11.669 iops : min= 8990, max= 9072, avg=9032.50, stdev=34.50, samples=4 00:18:11.669 lat (msec) : 4=0.07%, 10=99.74%, 20=0.19% 00:18:11.669 cpu : usr=64.81%, sys=25.82%, ctx=44, majf=0, minf=7 00:18:11.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:11.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:11.669 issued rwts: total=18085,18118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:11.669 00:18:11.669 Run status group 0 (all jobs): 00:18:11.669 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.1MB), run=2007-2007msec 00:18:11.669 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2007-2007msec 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:11.669 23:18:33 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:11.669 23:18:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:11.669 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:11.669 fio-3.35 00:18:11.669 Starting 1 thread 00:18:14.200 00:18:14.200 test: (groupid=0, jobs=1): err= 0: pid=75838: Wed Jul 24 23:18:36 2024 00:18:14.200 read: IOPS=8267, BW=129MiB/s (135MB/s)(259MiB/2006msec) 00:18:14.200 slat (usec): min=3, max=116, avg= 3.63, stdev= 2.01 00:18:14.200 clat (usec): min=2745, max=18583, avg=8719.12, stdev=2666.69 00:18:14.200 lat (usec): min=2748, max=18586, avg=8722.75, stdev=2666.74 00:18:14.200 clat percentiles (usec): 00:18:14.200 | 1.00th=[ 4146], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6325], 00:18:14.200 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9241], 00:18:14.200 | 70.00th=[10028], 80.00th=[10814], 90.00th=[11994], 95.00th=[13566], 00:18:14.200 | 99.00th=[16712], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:18:14.200 | 99.99th=[18482] 00:18:14.200 bw ( KiB/s): min=59040, max=74080, per=50.50%, avg=66795.00, stdev=8374.01, samples=4 00:18:14.200 iops : min= 3690, max= 4630, avg=4174.50, stdev=523.16, samples=4 00:18:14.200 write: IOPS=4837, BW=75.6MiB/s (79.3MB/s)(137MiB/1808msec); 0 zone resets 00:18:14.200 slat (usec): min=33, max=363, avg=38.07, stdev= 7.91 00:18:14.200 clat (usec): min=4278, max=23059, avg=11918.88, stdev=2118.56 00:18:14.200 lat (usec): min=4314, max=23095, avg=11956.95, stdev=2118.27 00:18:14.200 clat percentiles (usec): 00:18:14.200 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10159], 00:18:14.200 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:18:14.200 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14615], 95.00th=[15664], 00:18:14.200 | 99.00th=[17957], 99.50th=[19006], 99.90th=[21890], 99.95th=[22676], 00:18:14.200 | 99.99th=[22938] 00:18:14.200 bw ( KiB/s): min=60608, max=77636, per=89.75%, avg=69473.00, stdev=9163.92, samples=4 00:18:14.200 iops : min= 3788, max= 4852, avg=4342.00, stdev=572.67, samples=4 00:18:14.200 lat (msec) : 4=0.46%, 10=50.26%, 20=49.17%, 50=0.10% 00:18:14.200 cpu : usr=81.60%, sys=14.56%, ctx=3, majf=0, minf=12 00:18:14.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:14.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:14.200 issued rwts: total=16584,8747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.200 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:14.200 00:18:14.200 Run status group 0 (all jobs): 00:18:14.200 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2006-2006msec 00:18:14.200 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=137MiB (143MB), run=1808-1808msec 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.200 rmmod nvme_tcp 00:18:14.200 rmmod nvme_fabrics 00:18:14.200 rmmod nvme_keyring 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75706 ']' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75706 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75706 ']' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75706 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75706 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:14.200 killing process with pid 75706 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75706' 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75706 00:18:14.200 23:18:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75706 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
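Both fio jobs in this test (the run-status lines above) were driven through SPDK's userspace NVMe driver rather than a kernel block device: fio_nvme LD_PRELOADs the spdk_nvme ioengine and encodes the NVMe/TCP connection parameters in the --filename string. A condensed sketch of the first invocation, with paths and arguments copied from the trace; running it outside this CI VM's layout is an assumption:

# example_config.fio sets ioengine=spdk and iodepth=128, as echoed in the fio banner above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096

The second job is identical in shape but uses mock_sgl_config.fio and drops the --bs=4096 override.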
00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.767 00:18:14.767 real 0m8.865s 00:18:14.767 user 0m35.855s 00:18:14.767 sys 0m2.513s 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:14.767 23:18:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.767 ************************************ 00:18:14.767 END TEST nvmf_fio_host 00:18:14.767 ************************************ 00:18:14.767 23:18:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:14.767 23:18:37 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:14.767 23:18:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:14.767 23:18:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.767 23:18:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.767 ************************************ 00:18:14.767 START TEST nvmf_failover 00:18:14.767 ************************************ 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:14.767 * Looking for test storage... 00:18:14.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:14.767 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.768 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:15.027 Cannot find device "nvmf_tgt_br" 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:15.027 Cannot find device "nvmf_tgt_br2" 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:15.027 Cannot find device "nvmf_tgt_br" 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:15.027 Cannot find device "nvmf_tgt_br2" 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.027 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:15.027 23:18:37 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.027 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:15.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:18:15.297 00:18:15.297 --- 10.0.0.2 ping statistics --- 00:18:15.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.297 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:15.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:15.297 00:18:15.297 --- 10.0.0.3 ping statistics --- 00:18:15.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.297 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:15.297 00:18:15.297 --- 10.0.0.1 ping statistics --- 00:18:15.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.297 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=76052 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 76052 
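The ip/iptables sequence earlier in this trace is the same nvmf_veth_init topology the fio_host test built before it: a veth pair per endpoint bridged through nvmf_br, with the target end of one pair moved into the nvmf_tgt_ns_spdk namespace so the SPDK target and the initiator get separate IP stacks. A condensed sketch of that topology, reduced to the commands visible in the trace (the second target interface, loopback, and cleanup steps are omitted):

# Condensed from the trace above; not the full nvmf_veth_init implementation.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
ip link add nvmf_br type bridge
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # reachability check, as in the trace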
00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76052 ']' 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.297 23:18:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:15.297 [2024-07-24 23:18:37.619983] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:18:15.297 [2024-07-24 23:18:37.620095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.297 [2024-07-24 23:18:37.762210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:15.566 [2024-07-24 23:18:37.906852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.566 [2024-07-24 23:18:37.906932] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.566 [2024-07-24 23:18:37.906948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.566 [2024-07-24 23:18:37.906959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.566 [2024-07-24 23:18:37.906968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
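The EAL banner above is nvmfappstart launching the target inside the test namespace with an explicit core mask and then blocking until its RPC socket answers. A rough sketch of that step, assuming the same repo layout; the polling loop stands in for the waitforlisten helper and is not the script's own code:

# Launch the target in the namespace and wait for /var/tmp/spdk.sock to come up.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # keep polling until the RPC server is listening
done
echo "nvmf_tgt (pid $nvmfpid) is up"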
00:18:15.566 [2024-07-24 23:18:37.908337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.566 [2024-07-24 23:18:37.908484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.566 [2024-07-24 23:18:37.908493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.566 [2024-07-24 23:18:37.988884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.500 23:18:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:16.500 [2024-07-24 23:18:38.975007] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.759 23:18:39 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:17.017 Malloc0 00:18:17.017 23:18:39 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.275 23:18:39 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.533 23:18:39 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.533 [2024-07-24 23:18:39.977183] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.533 23:18:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:17.791 [2024-07-24 23:18:40.201332] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:17.792 23:18:40 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:18.050 [2024-07-24 23:18:40.421512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76110 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:18.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
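The rpc.py calls traced above (host/failover.sh lines 20-28) start the target inside the namespace and build everything the failover run needs: a TCP transport, a 64 MB Malloc0 bdev, the nqn.2016-06.io.spdk:cnode1 subsystem, and three listeners on 10.0.0.2 ports 4420-4422. A condensed sketch of that sequence follows; the commands and paths are taken from the trace, while the $rpc/$nqn shorthands and the socket-wait loop are simplifications added here in place of the script's waitforlisten helper.

#!/usr/bin/env bash
# Sketch of the target-side setup; arguments are copied from the trace above.
set -e

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1

# nvmf_tgt on cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF),
# run inside the target network namespace.
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # crude stand-in for waitforlisten

"$rpc" nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the options the test passes
"$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MB RAM-backed bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
for port in 4420 4421 4422; do                    # three listeners so paths can be failed over later
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
done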
00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76110 /var/tmp/bdevperf.sock 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76110 ']' 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.050 23:18:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:19.426 23:18:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.426 23:18:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:19.426 23:18:41 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.426 NVMe0n1 00:18:19.426 23:18:41 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:19.684 00:18:19.684 23:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76138 00:18:19.684 23:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:19.684 23:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.661 23:18:43 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.919 [2024-07-24 23:18:43.361483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.919 [2024-07-24 23:18:43.361659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 
00:18:20.919 [2024-07-24 23:18:43.361667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361940] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.361992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362066] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362098] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362217] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362307] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the 
state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.920 [2024-07-24 23:18:43.362481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362515] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362549] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 [2024-07-24 23:18:43.362614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e8a60 is same with the state(5) to be set 00:18:20.921 23:18:43 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:24.200 23:18:46 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:24.457 00:18:24.457 23:18:46 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:24.715 23:18:46 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:27.993 23:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.993 [2024-07-24 23:18:50.243869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.993 23:18:50 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:28.927 23:18:51 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:29.185 23:18:51 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 76138 00:18:35.770 0 
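The remaining failover.sh trace (lines 30-59, ending with the "0" just above) is the failover exercise itself: bdevperf opens controller NVMe0 over the 4420 and 4421 paths and runs a 15-second verify workload while the script removes and re-adds listeners underneath it; the long run of ABORTED - SQ DELETION completions dumped from try.txt further down appears to be the in-flight reads being aborted as a path's queue is deleted and retried on a surviving path, which is what the test checks. A sketch of the host-side sequence, reusing the commands from the trace; the bperf_rpc helper, the socket-wait loop and the final wait are simplifications added here in place of the script's waitforlisten and pid bookkeeping.

#!/usr/bin/env bash
# Sketch of the failover exercise; commands are taken from the trace above,
# the surrounding shell plumbing is simplified relative to host/failover.sh.
set -e

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
nqn=nqn.2016-06.io.spdk:cnode1
bperf_rpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }   # RPCs against bdevperf, not the target

# bdevperf: queue depth 128, 4 KiB I/O, verify workload, 15 seconds, own RPC socket.
"$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done   # crude stand-in for waitforlisten

# Two paths to the same subsystem -> one NVMe0n1 bdev with a failover path.
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1   # let the verify job start issuing I/O

# While I/O runs, take paths away and bring others back.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 3
bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
sleep 3
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

wait   # the verify run finishing with status 0 (the lone "0" above) is the pass criterion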
00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 76110 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76110 ']' 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76110 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76110 00:18:35.770 killing process with pid 76110 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76110' 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76110 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76110 00:18:35.770 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:35.770 [2024-07-24 23:18:40.490106] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:18:35.770 [2024-07-24 23:18:40.490234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76110 ] 00:18:35.770 [2024-07-24 23:18:40.625752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.770 [2024-07-24 23:18:40.747237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.770 [2024-07-24 23:18:40.822978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:35.770 Running I/O for 15 seconds... 
00:18:35.770 [2024-07-24 23:18:43.362676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.362975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.362990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 23:18:43.363006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.363021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.770 [2024-07-24 
23:18:43.363037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.770 [2024-07-24 23:18:43.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.363971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.363986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65120 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.771 [2024-07-24 23:18:43.364332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.771 [2024-07-24 23:18:43.364361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:35.772 [2024-07-24 23:18:43.364432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.364972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.364986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365043] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.772 [2024-07-24 23:18:43.365513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.772 [2024-07-24 23:18:43.365527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:35.773 [2024-07-24 23:18:43.365745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.365973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.365995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 
23:18:43.366055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.773 [2024-07-24 23:18:43.366346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.773 [2024-07-24 23:18:43.366716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.773 [2024-07-24 23:18:43.366731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:43.366745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:43.366767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:43.366781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:43.366796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:43.366810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:43.366825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:43.366839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:43.366854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x689750 is same with the state(5) to be set 00:18:35.774 [2024-07-24 23:18:43.366871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.774 [2024-07-24 23:18:43.366881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.774 [2024-07-24 23:18:43.366891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:18:35.774 [2024-07-24 23:18:43.366904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:43.366976] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x689750 was disconnected and freed. reset controller. 
00:18:35.774 [2024-07-24 23:18:43.366995] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:18:35.774 [2024-07-24 23:18:43.367061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:35.774 [2024-07-24 23:18:43.367082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:35.774 [2024-07-24 23:18:43.367098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:35.774 [2024-07-24 23:18:43.367117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:35.774 [2024-07-24 23:18:43.367145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:35.774 [2024-07-24 23:18:43.367160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:35.774 [2024-07-24 23:18:43.367175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:35.774 [2024-07-24 23:18:43.367188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:35.774 [2024-07-24 23:18:43.367201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:35.774 [2024-07-24 23:18:43.367245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612570 (9): Bad file descriptor
00:18:35.774 [2024-07-24 23:18:43.371039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:35.774 [2024-07-24 23:18:43.407365] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:35.774 [2024-07-24 23:18:46.968185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968610] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.774 [2024-07-24 23:18:46.968786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.968973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.968988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.969002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.969018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.969033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.969048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.774 [2024-07-24 23:18:46.969062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.774 [2024-07-24 23:18:46.969078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79736 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.775 [2024-07-24 23:18:46.969551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:35.775 [2024-07-24 23:18:46.969580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.775 [2024-07-24 23:18:46.969972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.775 [2024-07-24 23:18:46.969986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.970813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 
[2024-07-24 23:18:46.970859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.970980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.970994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.971024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.776 [2024-07-24 23:18:46.971053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.971084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.971121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.971164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.776 [2024-07-24 23:18:46.971194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.776 [2024-07-24 23:18:46.971209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.777 [2024-07-24 23:18:46.971571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79488 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.971972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.971996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.972014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.777 [2024-07-24 23:18:46.972044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x681370 is same with the state(5) to be set 00:18:35.777 [2024-07-24 23:18:46.972078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79552 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 
[2024-07-24 23:18:46.972152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.777 [2024-07-24 23:18:46.972408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.777 [2024-07-24 23:18:46.972418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:18:35.777 [2024-07-24 23:18:46.972431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.777 [2024-07-24 23:18:46.972452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.778 [2024-07-24 23:18:46.972462] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.778 [2024-07-24 23:18:46.972472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:18:35.778 [2024-07-24 23:18:46.972485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.778 [2024-07-24 23:18:46.972509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.778 [2024-07-24 23:18:46.972519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:18:35.778 [2024-07-24 23:18:46.972532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972605] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x681370 was disconnected and freed. reset controller. 00:18:35.778 [2024-07-24 23:18:46.972625] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:35.778 [2024-07-24 23:18:46.972685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.778 [2024-07-24 23:18:46.972707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.778 [2024-07-24 23:18:46.972735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.778 [2024-07-24 23:18:46.972762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.778 [2024-07-24 23:18:46.972790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:46.972803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.778 [2024-07-24 23:18:46.972854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612570 (9): Bad file descriptor 00:18:35.778 [2024-07-24 23:18:46.976683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.778 [2024-07-24 23:18:47.013512] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:35.778 [2024-07-24 23:18:51.520194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:29048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.778 [2024-07-24 23:18:51.520513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.520973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.520988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.778 [2024-07-24 23:18:51.521226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.778 [2024-07-24 23:18:51.521240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521255] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.521269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.521299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29176 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 
[2024-07-24 23:18:51.521887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.521977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.521993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.522007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.779 [2024-07-24 23:18:51.522037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.779 [2024-07-24 23:18:51.522232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.779 [2024-07-24 23:18:51.522250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.522795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.522974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.522990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 
23:18:51.523162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.780 [2024-07-24 23:18:51.523298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.523329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.523359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.523388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.523430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.780 [2024-07-24 23:18:51.523453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.780 [2024-07-24 23:18:51.523468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.781 [2024-07-24 23:18:51.523498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.781 [2024-07-24 23:18:51.523528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.781 [2024-07-24 23:18:51.523557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.523970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.523993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.781 [2024-07-24 23:18:51.524026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x681dc0 is same with the state(5) to be set 00:18:35.781 [2024-07-24 23:18:51.524059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29040 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29496 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29504 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29512 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29520 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29528 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29536 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 
23:18:51.524452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29544 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.781 [2024-07-24 23:18:51.524498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.781 [2024-07-24 23:18:51.524509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29552 len:8 PRP1 0x0 PRP2 0x0 00:18:35.781 [2024-07-24 23:18:51.524521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524601] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x681dc0 was disconnected and freed. reset controller. 00:18:35.781 [2024-07-24 23:18:51.524622] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:35.781 [2024-07-24 23:18:51.524681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.781 [2024-07-24 23:18:51.524702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.781 [2024-07-24 23:18:51.524746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.781 [2024-07-24 23:18:51.524773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.781 [2024-07-24 23:18:51.524810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.781 [2024-07-24 23:18:51.524823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.781 [2024-07-24 23:18:51.524872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x612570 (9): Bad file descriptor 00:18:35.782 [2024-07-24 23:18:51.528632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.782 [2024-07-24 23:18:51.568005] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:35.782 
00:18:35.782 Latency(us)
00:18:35.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.782 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:35.782 Verification LBA range: start 0x0 length 0x4000
00:18:35.782 NVMe0n1 : 15.01 8962.27 35.01 231.64 0.00 13890.38 636.74 17396.83
00:18:35.782 ===================================================================================================================
00:18:35.782 Total : 8962.27 35.01 231.64 0.00 13890.38 636.74 17396.83
00:18:35.782 Received shutdown signal, test time was about 15.000000 seconds
00:18:35.782 
00:18:35.782 Latency(us)
00:18:35.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:35.782 ===================================================================================================================
00:18:35.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76316
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76316 /var/tmp/bdevperf.sock
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76316 ']'
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:35.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.782 23:18:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:36.352 23:18:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.352 23:18:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:18:36.352 23:18:58 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:36.352 [2024-07-24 23:18:58.819377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:36.611 23:18:58 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:36.611 [2024-07-24 23:18:59.035542] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:36.611 23:18:59 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:36.869 NVMe0n1 00:18:36.869 23:18:59 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.434 00:18:37.434 23:18:59 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.692 00:18:37.692 23:18:59 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.692 23:18:59 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:37.692 23:19:00 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:37.950 23:19:00 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:41.234 23:19:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:41.234 23:19:03 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:41.234 23:19:03 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.234 23:19:03 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76393 00:18:41.234 23:19:03 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76393 00:18:42.609 0 00:18:42.609 23:19:04 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:42.609 [2024-07-24 23:18:57.669374] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
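[Editor's note] Stripped of the xtrace noise, host/failover.sh@76 through @92 does the following: publish two extra listeners for cnode1, register the same NVMe0 controller with bdevperf once per path, drop the 4420 path, wait, and then run the queued verify job so the I/O has to ride out the path change. A condensed sketch using the same RPCs, addresses and NQN that appear above (the loop is just shorthand for the three attach calls):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do   # one attach per path, all under the same bdev name
        $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    sleep 3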
00:18:42.609 [2024-07-24 23:18:57.669548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76316 ] 00:18:42.609 [2024-07-24 23:18:57.807820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.609 [2024-07-24 23:18:57.924818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.609 [2024-07-24 23:18:57.980468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:42.609 [2024-07-24 23:19:00.366902] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:42.609 [2024-07-24 23:19:00.367062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.609 [2024-07-24 23:19:00.367087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.609 [2024-07-24 23:19:00.367106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.609 [2024-07-24 23:19:00.367119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.609 [2024-07-24 23:19:00.367151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.609 [2024-07-24 23:19:00.367173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.609 [2024-07-24 23:19:00.367187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:42.609 [2024-07-24 23:19:00.367200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.609 [2024-07-24 23:19:00.367214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:42.609 [2024-07-24 23:19:00.367290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:42.609 [2024-07-24 23:19:00.367323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1762570 (9): Bad file descriptor 00:18:42.609 [2024-07-24 23:19:00.373624] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:42.609 Running I/O for 1 seconds... 
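[Editor's note] The try.txt excerpt above confirms the second half of the check: bdevperf comes up on a single core with the uring socket override, fails over from 4420 to 4421 once the first path is detached, and keeps the one-second verify job running. The workload is kicked off and verified roughly as below; perform_tests and the grep target are taken from the log, try.txt appears to be where the harness captures bdevperf's output, and the exact reset count the script expects belongs to the script, not this sketch.

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    wait $run_test_pid
    # Confirm a failover actually happened while I/O was in flight.
    grep -c 'Resetting controller successful' $SPDK/test/nvmf/host/try.txt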
00:18:42.609 00:18:42.609 Latency(us) 00:18:42.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.609 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:42.609 Verification LBA range: start 0x0 length 0x4000 00:18:42.609 NVMe0n1 : 1.01 7695.04 30.06 0.00 0.00 16535.95 1362.85 15013.70 00:18:42.609 =================================================================================================================== 00:18:42.609 Total : 7695.04 30.06 0.00 0.00 16535.95 1362.85 15013.70 00:18:42.609 23:19:04 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:42.609 23:19:04 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.609 23:19:05 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:42.867 23:19:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:42.867 23:19:05 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:43.125 23:19:05 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:43.382 23:19:05 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:46.661 23:19:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:46.661 23:19:08 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76316 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76316 ']' 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76316 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76316 00:18:46.661 killing process with pid 76316 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76316' 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76316 00:18:46.661 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76316 00:18:46.919 23:19:09 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:47.177 23:19:09 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.435 23:19:09 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:47.435 23:19:09 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:47.436 23:19:09 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:47.436 rmmod nvme_tcp 00:18:47.436 rmmod nvme_fabrics 00:18:47.436 rmmod nvme_keyring 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 76052 ']' 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 76052 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76052 ']' 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76052 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76052 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:47.436 killing process with pid 76052 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76052' 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76052 00:18:47.436 23:19:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76052 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.694 23:19:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.953 23:19:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:47.953 00:18:47.953 real 0m33.066s 00:18:47.953 user 2m7.391s 00:18:47.953 sys 0m5.898s 00:18:47.953 23:19:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.953 23:19:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:47.953 ************************************ 00:18:47.953 END TEST nvmf_failover 00:18:47.953 ************************************ 00:18:47.953 23:19:10 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:18:47.953 23:19:10 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:47.953 23:19:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.953 23:19:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.953 23:19:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.953 ************************************ 00:18:47.953 START TEST nvmf_host_discovery 00:18:47.953 ************************************ 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:47.953 * Looking for test storage... 00:18:47.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:47.953 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:47.953 Cannot find device "nvmf_tgt_br" 00:18:47.954 
23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.954 Cannot find device "nvmf_tgt_br2" 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:47.954 Cannot find device "nvmf_tgt_br" 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:47.954 Cannot find device "nvmf_tgt_br2" 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:47.954 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:48.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:48.211 00:18:48.211 --- 10.0.0.2 ping statistics --- 00:18:48.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.211 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:48.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:48.211 00:18:48.211 --- 10.0.0.3 ping statistics --- 00:18:48.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.211 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:48.211 00:18:48.211 --- 10.0.0.1 ping statistics --- 00:18:48.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.211 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.211 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76669 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76669 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76669 ']' 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.469 23:19:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.469 [2024-07-24 23:19:10.779630] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:18:48.469 [2024-07-24 23:19:10.779745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.469 [2024-07-24 23:19:10.923328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.728 [2024-07-24 23:19:11.068394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
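[Editor's note] The nvmf_veth_init block above builds the test network from scratch: veth pairs for the initiator and both target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything enslaved to the nvmf_br bridge, an iptables ACCEPT rule for port 4420, and one ping per address to prove connectivity before nvmf_tgt is started inside the namespace with -m 0x2. Condensed to its essentials (second target interface and the FORWARD rule omitted for brevity; names and addresses as in the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and back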
00:18:48.728 [2024-07-24 23:19:11.068469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.728 [2024-07-24 23:19:11.068483] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.728 [2024-07-24 23:19:11.068494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.728 [2024-07-24 23:19:11.068504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.728 [2024-07-24 23:19:11.068547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.728 [2024-07-24 23:19:11.146895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.295 [2024-07-24 23:19:11.763306] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.295 [2024-07-24 23:19:11.771427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.295 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 null0 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 null1 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76701 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76701 /tmp/host.sock 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76701 ']' 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.553 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.553 23:19:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 [2024-07-24 23:19:11.877693] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:18:49.553 [2024-07-24 23:19:11.877827] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76701 ] 00:18:49.553 [2024-07-24 23:19:12.026931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.811 [2024-07-24 23:19:12.157937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.811 [2024-07-24 23:19:12.216557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.377 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 23:19:12 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.635 23:19:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.635 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.635 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.636 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.894 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.895 [2024-07-24 23:19:13.243769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:50.895 
23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.895 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:18:51.152 23:19:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:18:51.410 [2024-07-24 23:19:13.876481] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:51.410 [2024-07-24 23:19:13.876813] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:51.410 [2024-07-24 23:19:13.876850] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:51.410 [2024-07-24 23:19:13.882526] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:51.668 [2024-07-24 23:19:13.940202] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:18:51.668 [2024-07-24 23:19:13.940451] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.235 23:19:14 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.235 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:52.494 
23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.494 [2024-07-24 23:19:14.817601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:52.494 [2024-07-24 23:19:14.818672] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:52.494 [2024-07-24 23:19:14.818719] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:52.494 [2024-07-24 23:19:14.824661] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.494 [2024-07-24 23:19:14.889956] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:52.494 [2024-07-24 23:19:14.889999] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:52.494 [2024-07-24 23:19:14.890006] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:52.494 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.752 23:19:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.752 [2024-07-24 23:19:15.058458] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:52.752 [2024-07-24 23:19:15.058521] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:52.752 [2024-07-24 23:19:15.064474] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:52.752 [2024-07-24 23:19:15.064509] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:52.752 [2024-07-24 23:19:15.064632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.752 [2024-07-24 23:19:15.064671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.752 [2024-07-24 23:19:15.064685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.752 [2024-07-24 23:19:15.064695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.752 [2024-07-24 23:19:15.064705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.752 [2024-07-24 23:19:15.064715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.752 [2024-07-24 23:19:15.064725] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.752 [2024-07-24 23:19:15.064735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.752 [2024-07-24 23:19:15.064744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eae610 is same with the state(5) to be set 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # local max=10 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:52.752 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:52.753 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.010 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.010 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.011 23:19:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.383 [2024-07-24 23:19:16.507184] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:54.383 [2024-07-24 23:19:16.507247] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:54.383 [2024-07-24 23:19:16.507268] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:54.383 [2024-07-24 23:19:16.513239] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:54.383 [2024-07-24 23:19:16.574226] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:54.384 [2024-07-24 23:19:16.574467] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 request: 00:18:54.384 { 00:18:54.384 "name": "nvme", 00:18:54.384 "trtype": "tcp", 00:18:54.384 "traddr": "10.0.0.2", 00:18:54.384 "adrfam": "ipv4", 00:18:54.384 "trsvcid": 
"8009", 00:18:54.384 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:54.384 "wait_for_attach": true, 00:18:54.384 "method": "bdev_nvme_start_discovery", 00:18:54.384 "req_id": 1 00:18:54.384 } 00:18:54.384 Got JSON-RPC error response 00:18:54.384 response: 00:18:54.384 { 00:18:54.384 "code": -17, 00:18:54.384 "message": "File exists" 00:18:54.384 } 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 request: 00:18:54.384 { 00:18:54.384 "name": "nvme_second", 00:18:54.384 "trtype": "tcp", 00:18:54.384 "traddr": "10.0.0.2", 00:18:54.384 "adrfam": "ipv4", 00:18:54.384 "trsvcid": "8009", 00:18:54.384 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:54.384 "wait_for_attach": true, 00:18:54.384 "method": "bdev_nvme_start_discovery", 00:18:54.384 "req_id": 1 00:18:54.384 } 00:18:54.384 Got JSON-RPC error response 00:18:54.384 response: 00:18:54.384 { 00:18:54.384 "code": -17, 00:18:54.384 "message": "File exists" 00:18:54.384 } 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.384 23:19:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.767 [2024-07-24 23:19:17.855093] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.767 [2024-07-24 23:19:17.855211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb10a0 with addr=10.0.0.2, port=8010 00:18:55.767 [2024-07-24 23:19:17.855256] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:55.767 [2024-07-24 23:19:17.855291] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:55.767 [2024-07-24 23:19:17.855324] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:56.699 [2024-07-24 23:19:18.855010] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.699 [2024-07-24 23:19:18.855107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb10a0 with addr=10.0.0.2, port=8010 00:18:56.699 [2024-07-24 23:19:18.855154] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:56.699 [2024-07-24 23:19:18.855182] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:56.699 [2024-07-24 23:19:18.855194] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:57.633 [2024-07-24 23:19:19.854837] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:57.633 request: 00:18:57.633 { 00:18:57.633 "name": "nvme_second", 00:18:57.633 "trtype": "tcp", 00:18:57.633 "traddr": "10.0.0.2", 00:18:57.633 "adrfam": "ipv4", 00:18:57.633 "trsvcid": "8010", 00:18:57.633 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:57.633 "wait_for_attach": false, 00:18:57.633 "attach_timeout_ms": 3000, 00:18:57.634 "method": "bdev_nvme_start_discovery", 00:18:57.634 "req_id": 1 00:18:57.634 } 00:18:57.634 Got JSON-RPC error response 00:18:57.634 response: 00:18:57.634 { 00:18:57.634 "code": -110, 00:18:57.634 "message": "Connection timed out" 00:18:57.634 } 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@651 -- # es=1 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76701 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.634 23:19:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.634 rmmod nvme_tcp 00:18:57.634 rmmod nvme_fabrics 00:18:57.634 rmmod nvme_keyring 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76669 ']' 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76669 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76669 ']' 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76669 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76669 00:18:57.634 killing process with pid 76669 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 
-- # echo 'killing process with pid 76669' 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76669 00:18:57.634 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76669 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:58.201 00:18:58.201 real 0m10.186s 00:18:58.201 user 0m19.528s 00:18:58.201 sys 0m2.107s 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:58.201 ************************************ 00:18:58.201 END TEST nvmf_host_discovery 00:18:58.201 ************************************ 00:18:58.201 23:19:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:58.201 23:19:20 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:58.201 23:19:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:58.201 23:19:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.201 23:19:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:58.201 ************************************ 00:18:58.201 START TEST nvmf_host_multipath_status 00:18:58.201 ************************************ 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:58.201 * Looking for test storage... 
00:18:58.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.201 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:58.202 Cannot find device "nvmf_tgt_br" 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:18:58.202 Cannot find device "nvmf_tgt_br2" 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:58.202 Cannot find device "nvmf_tgt_br" 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:58.202 Cannot find device "nvmf_tgt_br2" 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:58.202 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:58.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:58.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:58.461 23:19:20 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:58.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:58.461 00:18:58.461 --- 10.0.0.2 ping statistics --- 00:18:58.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.461 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:58.461 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:58.461 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:18:58.461 00:18:58.461 --- 10.0.0.3 ping statistics --- 00:18:58.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.461 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:58.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:58.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:58.461 00:18:58.461 --- 10.0.0.1 ping statistics --- 00:18:58.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.461 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:58.461 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=77158 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 77158 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77158 ']' 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:58.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.720 23:19:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:58.720 [2024-07-24 23:19:20.998052] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
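The trace above is nvmftestinit building the veth-based test network: it creates three veth pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, assigns 10.0.0.1 (initiator), 10.0.0.2 and 10.0.0.3 (target), bridges the host ends on nvmf_br, opens TCP port 4420 in iptables, and confirms reachability with single pings, after which nvmfappstart launches nvmf_tgt inside the namespace (the SPDK/DPDK startup notices that follow). A condensed, hand-runnable sketch of the same steps, using the interface names and addresses from this run (error handling and teardown of any previous run omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                          # target namespace -> host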
00:18:58.720 [2024-07-24 23:19:20.998124] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.720 [2024-07-24 23:19:21.134286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:58.978 [2024-07-24 23:19:21.266842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.978 [2024-07-24 23:19:21.266912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.978 [2024-07-24 23:19:21.266924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.978 [2024-07-24 23:19:21.266932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.979 [2024-07-24 23:19:21.266939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:58.979 [2024-07-24 23:19:21.267060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.979 [2024-07-24 23:19:21.267344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.979 [2024-07-24 23:19:21.343367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:59.544 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.544 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:18:59.544 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.544 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:59.544 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:59.802 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.802 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77158 00:18:59.802 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.065 [2024-07-24 23:19:22.336880] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.065 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:00.323 Malloc0 00:19:00.323 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:00.582 23:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.840 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.097 [2024-07-24 23:19:23.451648] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.097 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:01.355 [2024-07-24 23:19:23.679810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77210 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77210 /var/tmp/bdevperf.sock 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77210 ']' 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.355 23:19:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.289 23:19:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.289 23:19:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:19:02.289 23:19:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:02.547 23:19:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:02.805 Nvme0n1 00:19:02.805 23:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:03.371 Nvme0n1 00:19:03.371 23:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:03.371 23:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:05.280 23:19:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:05.281 23:19:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:05.539 23:19:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:05.797 23:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:06.733 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:06.733 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:06.733 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.733 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.991 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.992 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:06.992 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.992 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:07.250 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.250 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.250 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.250 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:07.508 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.508 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.508 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.508 23:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.766 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.766 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:07.766 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.766 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:08.024 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.024 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:19:08.024 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.024 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.282 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.282 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:08.282 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:08.540 23:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:08.799 23:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:09.733 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:09.733 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:09.733 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.733 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.991 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:09.991 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:09.991 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.991 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.249 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.249 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.249 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:10.249 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.508 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.508 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.508 23:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.508 23:19:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.766 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.766 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:10.766 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.766 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.023 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.023 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.023 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.023 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.280 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.280 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:11.280 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:11.538 23:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:11.795 23:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:12.729 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:12.729 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:12.729 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:12.729 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.987 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.987 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:12.987 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.987 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:13.244 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:19:13.244 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:13.244 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.244 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:13.502 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.502 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:13.502 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:13.502 23:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.760 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.761 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:13.761 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.761 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.019 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.019 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:14.019 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.019 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.336 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.336 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:14.336 23:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:14.621 23:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:14.879 23:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:15.814 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:15.814 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:15.814 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.814 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.072 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.072 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:16.072 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.072 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:16.638 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.638 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:16.638 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.638 23:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:16.638 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.638 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:16.638 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:16.638 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.896 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.896 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:16.896 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.896 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:17.154 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.154 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:17.154 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:17.154 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.413 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:17.413 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:19:17.413 23:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:17.671 23:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:17.929 23:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:18.863 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:18.863 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:18.863 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:18.863 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.121 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.121 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:19.121 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.121 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:19.380 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.380 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:19.380 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.380 23:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:19.638 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.638 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:19.638 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:19.638 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.896 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.896 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:19.896 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.896 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.154 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:20.154 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:20.154 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.154 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:20.719 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:20.719 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:20.719 23:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:20.719 23:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:20.977 23:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:21.928 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:21.928 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:21.928 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:21.928 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.494 23:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:22.752 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.752 23:19:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:22.752 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.752 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:23.009 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.009 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:23.009 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.009 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:23.301 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.301 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:23.301 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:23.301 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.560 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.560 23:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:23.818 23:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:23.818 23:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:24.076 23:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:24.333 23:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:25.267 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:25.267 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:25.267 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.267 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:25.528 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.528 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:25.528 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.528 23:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:25.786 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.786 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:25.786 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.786 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:26.093 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.093 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:26.093 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.093 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:26.374 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.374 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:26.374 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.374 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:26.632 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.632 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:26.632 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.632 23:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:26.889 23:19:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.889 23:19:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:26.889 23:19:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:27.147 23:19:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:27.405 23:19:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:28.338 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:28.339 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:28.339 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.339 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:28.597 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:28.597 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:28.597 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.597 23:19:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:28.854 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.854 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:28.854 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.854 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:29.111 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.111 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:29.111 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.111 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:29.369 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.369 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:29.369 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.369 23:19:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:29.627 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.627 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:29.627 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.627 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:29.884 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.884 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:29.884 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:30.143 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:30.401 23:19:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:31.340 23:19:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:31.340 23:19:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:31.340 23:19:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.340 23:19:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:31.598 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:31.598 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:31.856 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.856 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:32.114 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.114 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:32.114 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.114 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:32.372 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.372 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:32.372 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.372 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:19:32.630 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.630 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:32.630 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:32.630 23:19:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.904 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.904 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:32.904 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:32.904 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:33.162 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.162 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:33.162 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:33.162 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:33.419 23:19:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:34.793 23:19:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:34.793 23:19:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:34.793 23:19:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.793 23:19:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:34.793 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.793 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:34.793 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.793 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:35.052 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:35.052 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:19:35.052 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:35.052 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.310 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.310 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:35.310 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.310 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:35.568 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.568 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:35.568 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:35.568 23:19:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.837 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.837 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:35.837 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.837 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77210 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77210 ']' 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77210 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77210 00:19:36.133 killing process with pid 77210 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77210' 00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77210 
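The check_status/port_status calls traced above reduce to one RPC query plus a jq filter per listener port and field, and the ANA state flips are plain nvmf_subsystem_listener_set_ana_state calls against the target. A minimal stand-alone sketch of that loop follows, assuming the same bdevperf RPC socket, script path, and listener ports that appear in the trace (/var/tmp/bdevperf.sock, 4420/4421); it is an illustration of the traced commands, not part of the captured log.

#!/usr/bin/env bash
# Sketch of the status check performed by the traced port_status helper:
# query bdevperf's io_paths over its RPC socket and pick one field per listener port.
# rpc path, socket, and ports are copied from the trace above; adjust for your setup.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

for port in 4420 4421; do
  for field in current connected accessible; do
    val=$("$rpc" -s "$sock" bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\") | .$field")
    echo "port=$port $field=$val"
  done
done

# The ANA state itself is changed on the target side as in the trace, e.g.:
#   "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
#       -t tcp -a 10.0.0.2 -s 4421 -n inaccessible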
00:19:36.133 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77210 00:19:36.391 Connection closed with partial response: 00:19:36.391 00:19:36.391 00:19:36.656 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77210 00:19:36.656 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.656 [2024-07-24 23:19:23.746802] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:19:36.656 [2024-07-24 23:19:23.746920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77210 ] 00:19:36.656 [2024-07-24 23:19:23.884934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.656 [2024-07-24 23:19:24.037892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.656 [2024-07-24 23:19:24.111792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:36.656 Running I/O for 90 seconds... 00:19:36.656 [2024-07-24 23:19:40.075834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.656 [2024-07-24 23:19:40.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.656 [2024-07-24 23:19:40.076033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.656 [2024-07-24 23:19:40.076051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.656 [2024-07-24 23:19:40.076073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.656 [2024-07-24 23:19:40.076088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.656 [2024-07-24 23:19:40.076109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.656 [2024-07-24 23:19:40.076124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.656 [2024-07-24 23:19:40.076161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.656 [2024-07-24 23:19:40.076177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076623] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.076644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.076971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.076990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 
23:19:40.077023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.657 [2024-07-24 23:19:40.077571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.077606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.077640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.657 [2024-07-24 23:19:40.077674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.657 [2024-07-24 23:19:40.077688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.077973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 
23:19:40.078159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.078485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.078975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.078994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.079008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.079027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.079041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.079061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.658 [2024-07-24 23:19:40.079075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.079098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.658 [2024-07-24 23:19:40.079113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.658 [2024-07-24 23:19:40.079133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079330] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.079886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.079922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.079967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.079989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.080385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.080399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.081765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.081824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.659 [2024-07-24 23:19:40.081860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.081896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.081932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.081967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.659 [2024-07-24 23:19:40.081987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.659 [2024-07-24 23:19:40.082002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.082715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.082730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.660 [2024-07-24 23:19:40.083561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:19:36.660 [2024-07-24 23:19:40.083614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.660 [2024-07-24 23:19:40.083687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.660 [2024-07-24 23:19:40.083701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.083899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.083913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.084266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.084321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.084354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.661 [2024-07-24 23:19:40.084421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.661 [2024-07-24 23:19:40.084960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:36.661 [2024-07-24 23:19:40.084973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.084993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.662 [2024-07-24 23:19:40.085874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.085960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.085974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 
m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:36.662 [2024-07-24 23:19:40.086929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.662 [2024-07-24 23:19:40.086942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.086962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.086975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.086995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.663 [2024-07-24 23:19:40.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.663 [2024-07-24 23:19:40.087723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.087955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.087988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.088024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.088044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.088065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.088079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.088100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.088141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.088170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:36.663 [2024-07-24 23:19:40.088201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.663 [2024-07-24 23:19:40.088217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088457] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.088783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.088797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:19:36.664 [2024-07-24 23:19:40.088817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.664 [2024-07-24 23:19:40.096796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.096984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.096998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.097018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.097031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.097065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.097084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.664 [2024-07-24 23:19:40.097098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.664 [2024-07-24 23:19:40.097118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:36.665 [2024-07-24 23:19:40.097539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.097899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.097969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.097983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.665 [2024-07-24 23:19:40.098212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.665 [2024-07-24 23:19:40.098509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.665 [2024-07-24 23:19:40.098534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 
dnr:0 00:19:36.666 [2024-07-24 23:19:40.098639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.666 [2024-07-24 23:19:40.098786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.666 [2024-07-24 23:19:40.098819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.666 [2024-07-24 23:19:40.098839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:40.098853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:40.099373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:40.099401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.880724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.880855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.880894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.880926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.880978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.880991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.881678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.881970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.881991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.882004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.882041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.882055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.882076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.667 [2024-07-24 23:19:55.882090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.882110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.667 [2024-07-24 23:19:55.882124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:36.667 [2024-07-24 23:19:55.882145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 
23:19:55.882306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.882850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.882969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.882983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.883003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.883016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.883036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.883051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.884798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.884831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.884864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.884984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.884997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.885017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.885031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.885051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:36.668 [2024-07-24 23:19:55.885065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:36.668 [2024-07-24 23:19:55.885085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:36.668 [2024-07-24 23:19:55.885098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:36.668 Received shutdown signal, test time was about 32.891662 seconds 00:19:36.668 00:19:36.668 Latency(us) 00:19:36.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.669 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.669 Verification LBA range: start 0x0 length 0x4000 00:19:36.669 Nvme0n1 : 32.89 8661.83 33.84 0.00 0.00 14750.53 431.94 4057035.87 00:19:36.669 =================================================================================================================== 00:19:36.669 Total : 8661.83 33.84 0.00 0.00 14750.53 431.94 4057035.87 00:19:36.669 23:19:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.928 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:36.928 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:36.928 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:36.928 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.928 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.929 rmmod nvme_tcp 00:19:36.929 rmmod nvme_fabrics 00:19:36.929 rmmod nvme_keyring 00:19:36.929 23:19:59 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 77158 ']' 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 77158 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77158 ']' 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77158 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77158 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77158' 00:19:36.929 killing process with pid 77158 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77158 00:19:36.929 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77158 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.187 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.446 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:37.446 00:19:37.446 real 0m39.214s 00:19:37.446 user 2m5.857s 00:19:37.446 sys 0m11.866s 00:19:37.446 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:37.446 23:19:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:37.446 ************************************ 00:19:37.446 END TEST nvmf_host_multipath_status 00:19:37.446 ************************************ 00:19:37.446 23:19:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:37.446 23:19:59 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:37.446 23:19:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:37.446 23:19:59 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.446 23:19:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:37.446 ************************************ 00:19:37.446 START TEST nvmf_discovery_remove_ifc 00:19:37.446 ************************************ 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:37.446 * Looking for test storage... 00:19:37.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.446 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.447 23:19:59 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:37.447 Cannot find device "nvmf_tgt_br" 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.447 Cannot find device "nvmf_tgt_br2" 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:37.447 Cannot find device "nvmf_tgt_br" 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:37.447 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:37.705 Cannot find device "nvmf_tgt_br2" 00:19:37.705 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:37.705 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:37.705 23:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:37.705 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.705 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:37.705 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.705 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:37.705 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:37.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:37.706 00:19:37.706 --- 10.0.0.2 ping statistics --- 00:19:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.706 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:37.706 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.706 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:37.706 00:19:37.706 --- 10.0.0.3 ping statistics --- 00:19:37.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.706 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:37.706 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:37.964 00:19:37.964 --- 10.0.0.1 ping statistics --- 00:19:37.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.964 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=78002 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 78002 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 78002 ']' 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.964 23:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:37.964 [2024-07-24 23:20:00.282369] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:19:37.964 [2024-07-24 23:20:00.282475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.964 [2024-07-24 23:20:00.422392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.223 [2024-07-24 23:20:00.561163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.223 [2024-07-24 23:20:00.561248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.223 [2024-07-24 23:20:00.561260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.223 [2024-07-24 23:20:00.561268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.223 [2024-07-24 23:20:00.561276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.223 [2024-07-24 23:20:00.561304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.223 [2024-07-24 23:20:00.635749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.158 [2024-07-24 23:20:01.326557] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.158 [2024-07-24 23:20:01.334693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:39.158 null0 00:19:39.158 [2024-07-24 23:20:01.366567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.158 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
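By this point nvmfappstart has launched nvmf_tgt inside the namespace (nvmfpid=78002) and the batched rpc_cmd at host/discovery_remove_ifc.sh@43 has produced a TCP transport, a null bdev, and listeners on 10.0.0.2:8009 (discovery) and 10.0.0.2:4420. The trace collapses that RPC batch, so the following is only a hedged sketch of a sequence that yields the same listeners, assuming rpc_cmd forwards to scripts/rpc.py on the target's default /var/tmp/spdk.sock; the null-bdev size and block size are illustrative guesses, everything else is taken from the surrounding log:

    # target side (inside the namespace); rpc.py talks to /var/tmp/spdk.sock by default
    scripts/rpc.py nvmf_create_transport -t tcp -o     # $NVMF_TRANSPORT_OPTS from the trace
    scripts/rpc.py bdev_null_create null0 1000 512     # bdev name from the trace, sizes assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009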
00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=78033 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 78033 /tmp/host.sock 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 78033 ']' 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.158 23:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:39.158 [2024-07-24 23:20:01.449335] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:19:39.158 [2024-07-24 23:20:01.449610] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78033 ] 00:19:39.158 [2024-07-24 23:20:01.592437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.434 [2024-07-24 23:20:01.743415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.001 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:40.259 [2024-07-24 23:20:02.562789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 
--reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.259 23:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.214 [2024-07-24 23:20:03.629883] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:41.214 [2024-07-24 23:20:03.629920] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:41.214 [2024-07-24 23:20:03.629939] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:41.214 [2024-07-24 23:20:03.635926] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:41.214 [2024-07-24 23:20:03.693729] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:41.214 [2024-07-24 23:20:03.693797] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:41.214 [2024-07-24 23:20:03.693830] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:41.214 [2024-07-24 23:20:03.693854] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:41.214 [2024-07-24 23:20:03.693884] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:41.214 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.214 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:41.214 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.214 [2024-07-24 23:20:03.698274] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbdcde0 was disconnected and freed. delete nvme_qpair. 
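On the host side the second nvmf_tgt (hostpid=78033) serves RPCs on /tmp/host.sock, and the discovery attach logged above is what produced bdev nvme0n1. Written out as standalone invocations, the host-side RPCs traced at host/discovery_remove_ifc.sh@65, @66 and @69 look like this, with every flag and value copied from the trace and assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the discovery call block until the discovered subsystem's controller is attached, which is why nvme0n1 is already present when wait_for_bdev nvme0n1 runs immediately afterwards.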
00:19:41.214 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.471 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:41.472 23:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:42.404 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.663 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:42.663 23:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:43.599 23:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.536 23:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:44.536 23:20:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.794 23:20:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:44.794 23:20:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:45.729 23:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:46.698 [2024-07-24 23:20:09.120930] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:46.698 [2024-07-24 23:20:09.121001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.698 [2024-07-24 23:20:09.121019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.698 [2024-07-24 23:20:09.121033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.698 [2024-07-24 23:20:09.121042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.698 [2024-07-24 23:20:09.121051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.698 [2024-07-24 23:20:09.121061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.698 [2024-07-24 23:20:09.121070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.698 [2024-07-24 23:20:09.121079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.698 [2024-07-24 23:20:09.121088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.698 [2024-07-24 23:20:09.121097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.698 [2024-07-24 23:20:09.121106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb42ac0 is same with the state(5) to be set 00:19:46.698 [2024-07-24 23:20:09.130924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb42ac0 (9): Bad file descriptor 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.698 [2024-07-24 23:20:09.140947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:46.698 23:20:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:48.072 [2024-07-24 23:20:10.147279] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:48.072 [2024-07-24 23:20:10.147437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb42ac0 with addr=10.0.0.2, 
port=4420 00:19:48.072 [2024-07-24 23:20:10.147480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb42ac0 is same with the state(5) to be set 00:19:48.072 [2024-07-24 23:20:10.147562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb42ac0 (9): Bad file descriptor 00:19:48.072 [2024-07-24 23:20:10.147690] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:48.072 [2024-07-24 23:20:10.147744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:48.072 [2024-07-24 23:20:10.147765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:48.072 [2024-07-24 23:20:10.147795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:48.073 [2024-07-24 23:20:10.147845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:48.073 [2024-07-24 23:20:10.147871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:48.073 23:20:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:49.006 [2024-07-24 23:20:11.147968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:49.006 [2024-07-24 23:20:11.148027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:49.006 [2024-07-24 23:20:11.148041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:49.006 [2024-07-24 23:20:11.148053] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:49.006 [2024-07-24 23:20:11.148084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
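What this stretch of the log is exercising: host/discovery_remove_ifc.sh@75-@76 delete 10.0.0.2/24 from nvmf_tgt_if inside the namespace and bring the link down, so the host loses its only path to the subsystem; the connection-timeout, failover and controller-reset errors above are the expected fallout. Meanwhile wait_for_bdev '' polls get_bdev_list once per second until nvme0n1 drops out of the host's bdev list. A minimal sketch of that polling pair, matching the rpc_cmd | jq | sort | xargs pipeline in the trace (the loop body is simplified; the real helper also carries retry/timeout handling not shown here):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once a second until the bdev list equals the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''    # wait until no bdev named nvme0n1 remains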
00:19:49.006 [2024-07-24 23:20:11.148122] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:49.006 [2024-07-24 23:20:11.148205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.006 [2024-07-24 23:20:11.148225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.006 [2024-07-24 23:20:11.148240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.006 [2024-07-24 23:20:11.148250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.006 [2024-07-24 23:20:11.148261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.006 [2024-07-24 23:20:11.148271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.006 [2024-07-24 23:20:11.148281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.006 [2024-07-24 23:20:11.148291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.007 [2024-07-24 23:20:11.148302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.007 [2024-07-24 23:20:11.148311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.007 [2024-07-24 23:20:11.148321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:49.007 [2024-07-24 23:20:11.148365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb46860 (9): Bad file descriptor 00:19:49.007 [2024-07-24 23:20:11.149353] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:49.007 [2024-07-24 23:20:11.149374] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:49.007 23:20:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.941 23:20:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:49.941 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.199 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:50.199 23:20:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:50.804 [2024-07-24 23:20:13.154279] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:50.804 [2024-07-24 23:20:13.154307] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:50.804 [2024-07-24 23:20:13.154326] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.804 [2024-07-24 23:20:13.160353] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:50.804 [2024-07-24 23:20:13.217233] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:50.804 [2024-07-24 23:20:13.217432] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:50.804 [2024-07-24 23:20:13.217501] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:50.804 [2024-07-24 23:20:13.217627] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:50.804 [2024-07-24 23:20:13.217690] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:50.804 [2024-07-24 23:20:13.222985] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbe9d90 was disconnected and freed. delete nvme_qpair. 
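This is the recovery half of the test: host/discovery_remove_ifc.sh@82-@83 put 10.0.0.2/24 back on nvmf_tgt_if and bring the link up, the discovery poller reconnects, and a fresh controller is attached as nvme1, so the namespace reappears as nvme1n1; wait_for_bdev nvme1n1 confirms that before the traps are cleared and both processes are killed. In the same sketch style as above, reusing the wait_for_bdev helper outlined earlier:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1    # re-discovered controller attaches as nvme1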
00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 78033 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 78033 ']' 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 78033 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78033 00:19:51.062 killing process with pid 78033 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78033' 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 78033 00:19:51.062 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 78033 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.629 rmmod nvme_tcp 00:19:51.629 rmmod nvme_fabrics 00:19:51.629 rmmod nvme_keyring 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:51.629 23:20:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 78002 ']' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 78002 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 78002 ']' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 78002 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78002 00:19:51.629 killing process with pid 78002 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78002' 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 78002 00:19:51.629 23:20:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 78002 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.887 ************************************ 00:19:51.887 END TEST nvmf_discovery_remove_ifc 00:19:51.887 ************************************ 00:19:51.887 00:19:51.887 real 0m14.552s 00:19:51.887 user 0m25.260s 00:19:51.887 sys 0m2.547s 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.887 23:20:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:51.887 23:20:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:51.887 23:20:14 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:51.887 23:20:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:51.887 23:20:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.887 23:20:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.887 ************************************ 00:19:51.887 START TEST nvmf_identify_kernel_target 00:19:51.887 ************************************ 00:19:51.887 23:20:14 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:52.146 * Looking for test storage... 00:19:52.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:52.146 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:52.147 Cannot find device "nvmf_tgt_br" 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:52.147 Cannot find device "nvmf_tgt_br2" 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:52.147 Cannot find device "nvmf_tgt_br" 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:52.147 Cannot find device "nvmf_tgt_br2" 00:19:52.147 23:20:14 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:52.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:52.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:52.147 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:19:52.406 00:19:52.406 --- 10.0.0.2 ping statistics --- 00:19:52.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.406 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.406 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.406 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:52.406 00:19:52.406 --- 10.0.0.3 ping statistics --- 00:19:52.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.406 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:52.406 00:19:52.406 --- 10.0.0.1 ping statistics --- 00:19:52.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.406 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.406 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:52.407 23:20:14 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:52.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:52.973 Waiting for block devices as requested 00:19:52.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:52.973 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:53.231 No valid GPT data, bailing 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:53.231 No valid GPT data, bailing 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:53.231 No valid GPT data, bailing 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:53.231 No valid GPT data, bailing 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:53.231 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
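The loop traced just above (nvmf/common.sh@650-653) scans every namespace under /sys/block/nvme*, skips zoned devices, and rejects anything that already carries a partition table before settling on a backing device for the kernel target; the last free namespace found wins, which is why /dev/nvme1n1 ends up selected here. A condensed sketch of that selection, simplified to the blkid check only (the trace additionally runs scripts/spdk-gpt.py):

    # Simplified sketch of the backing-device scan traced above; the last
    # non-zoned, unpartitioned NVMe namespace wins, as in the log.
    nvme=""
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=$(basename "$block")
        # skip zoned namespaces
        if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
            continue
        fi
        # skip namespaces that already carry a partition table
        if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
            continue
        fi
        nvme=/dev/$dev
    done
    echo "selected backing device: ${nvme:-none}"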
00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -a 10.0.0.1 -t tcp -s 4420 00:19:53.490 00:19:53.490 Discovery Log Number of Records 2, Generation counter 2 00:19:53.490 =====Discovery Log Entry 0====== 00:19:53.490 trtype: tcp 00:19:53.490 adrfam: ipv4 00:19:53.490 subtype: current discovery subsystem 00:19:53.490 treq: not specified, sq flow control disable supported 00:19:53.490 portid: 1 00:19:53.490 trsvcid: 4420 00:19:53.490 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:53.490 traddr: 10.0.0.1 00:19:53.490 eflags: none 00:19:53.490 sectype: none 00:19:53.490 =====Discovery Log Entry 1====== 00:19:53.490 trtype: tcp 00:19:53.490 adrfam: ipv4 00:19:53.490 subtype: nvme subsystem 00:19:53.490 treq: not specified, sq flow control disable supported 00:19:53.490 portid: 1 00:19:53.490 trsvcid: 4420 00:19:53.490 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:53.490 traddr: 10.0.0.1 00:19:53.490 eflags: none 00:19:53.490 sectype: none 00:19:53.490 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:53.490 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:53.490 ===================================================== 00:19:53.490 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:53.490 ===================================================== 00:19:53.490 Controller Capabilities/Features 00:19:53.490 ================================ 00:19:53.490 Vendor ID: 0000 00:19:53.490 Subsystem Vendor ID: 0000 00:19:53.490 Serial Number: 70e3a874cde91e819db8 00:19:53.490 Model Number: Linux 00:19:53.490 Firmware Version: 6.7.0-68 00:19:53.490 Recommended Arb Burst: 0 00:19:53.490 IEEE OUI Identifier: 00 00 00 00:19:53.490 Multi-path I/O 00:19:53.490 May have multiple subsystem ports: No 00:19:53.490 May have multiple controllers: No 00:19:53.490 Associated with SR-IOV VF: No 00:19:53.490 Max Data Transfer Size: Unlimited 00:19:53.490 Max Number of Namespaces: 0 
00:19:53.490 Max Number of I/O Queues: 1024 00:19:53.490 NVMe Specification Version (VS): 1.3 00:19:53.490 NVMe Specification Version (Identify): 1.3 00:19:53.490 Maximum Queue Entries: 1024 00:19:53.490 Contiguous Queues Required: No 00:19:53.490 Arbitration Mechanisms Supported 00:19:53.490 Weighted Round Robin: Not Supported 00:19:53.490 Vendor Specific: Not Supported 00:19:53.490 Reset Timeout: 7500 ms 00:19:53.490 Doorbell Stride: 4 bytes 00:19:53.490 NVM Subsystem Reset: Not Supported 00:19:53.490 Command Sets Supported 00:19:53.490 NVM Command Set: Supported 00:19:53.490 Boot Partition: Not Supported 00:19:53.490 Memory Page Size Minimum: 4096 bytes 00:19:53.490 Memory Page Size Maximum: 4096 bytes 00:19:53.490 Persistent Memory Region: Not Supported 00:19:53.490 Optional Asynchronous Events Supported 00:19:53.490 Namespace Attribute Notices: Not Supported 00:19:53.490 Firmware Activation Notices: Not Supported 00:19:53.490 ANA Change Notices: Not Supported 00:19:53.490 PLE Aggregate Log Change Notices: Not Supported 00:19:53.490 LBA Status Info Alert Notices: Not Supported 00:19:53.490 EGE Aggregate Log Change Notices: Not Supported 00:19:53.490 Normal NVM Subsystem Shutdown event: Not Supported 00:19:53.490 Zone Descriptor Change Notices: Not Supported 00:19:53.490 Discovery Log Change Notices: Supported 00:19:53.490 Controller Attributes 00:19:53.490 128-bit Host Identifier: Not Supported 00:19:53.490 Non-Operational Permissive Mode: Not Supported 00:19:53.490 NVM Sets: Not Supported 00:19:53.490 Read Recovery Levels: Not Supported 00:19:53.491 Endurance Groups: Not Supported 00:19:53.491 Predictable Latency Mode: Not Supported 00:19:53.491 Traffic Based Keep ALive: Not Supported 00:19:53.491 Namespace Granularity: Not Supported 00:19:53.491 SQ Associations: Not Supported 00:19:53.491 UUID List: Not Supported 00:19:53.491 Multi-Domain Subsystem: Not Supported 00:19:53.491 Fixed Capacity Management: Not Supported 00:19:53.491 Variable Capacity Management: Not Supported 00:19:53.491 Delete Endurance Group: Not Supported 00:19:53.491 Delete NVM Set: Not Supported 00:19:53.491 Extended LBA Formats Supported: Not Supported 00:19:53.491 Flexible Data Placement Supported: Not Supported 00:19:53.491 00:19:53.491 Controller Memory Buffer Support 00:19:53.491 ================================ 00:19:53.491 Supported: No 00:19:53.491 00:19:53.491 Persistent Memory Region Support 00:19:53.491 ================================ 00:19:53.491 Supported: No 00:19:53.491 00:19:53.491 Admin Command Set Attributes 00:19:53.491 ============================ 00:19:53.491 Security Send/Receive: Not Supported 00:19:53.491 Format NVM: Not Supported 00:19:53.491 Firmware Activate/Download: Not Supported 00:19:53.491 Namespace Management: Not Supported 00:19:53.491 Device Self-Test: Not Supported 00:19:53.491 Directives: Not Supported 00:19:53.491 NVMe-MI: Not Supported 00:19:53.491 Virtualization Management: Not Supported 00:19:53.491 Doorbell Buffer Config: Not Supported 00:19:53.491 Get LBA Status Capability: Not Supported 00:19:53.491 Command & Feature Lockdown Capability: Not Supported 00:19:53.491 Abort Command Limit: 1 00:19:53.491 Async Event Request Limit: 1 00:19:53.491 Number of Firmware Slots: N/A 00:19:53.491 Firmware Slot 1 Read-Only: N/A 00:19:53.491 Firmware Activation Without Reset: N/A 00:19:53.491 Multiple Update Detection Support: N/A 00:19:53.491 Firmware Update Granularity: No Information Provided 00:19:53.491 Per-Namespace SMART Log: No 00:19:53.491 Asymmetric Namespace Access Log Page: 
Not Supported 00:19:53.491 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:53.491 Command Effects Log Page: Not Supported 00:19:53.491 Get Log Page Extended Data: Supported 00:19:53.491 Telemetry Log Pages: Not Supported 00:19:53.491 Persistent Event Log Pages: Not Supported 00:19:53.491 Supported Log Pages Log Page: May Support 00:19:53.491 Commands Supported & Effects Log Page: Not Supported 00:19:53.491 Feature Identifiers & Effects Log Page:May Support 00:19:53.491 NVMe-MI Commands & Effects Log Page: May Support 00:19:53.491 Data Area 4 for Telemetry Log: Not Supported 00:19:53.491 Error Log Page Entries Supported: 1 00:19:53.491 Keep Alive: Not Supported 00:19:53.491 00:19:53.491 NVM Command Set Attributes 00:19:53.491 ========================== 00:19:53.491 Submission Queue Entry Size 00:19:53.491 Max: 1 00:19:53.491 Min: 1 00:19:53.491 Completion Queue Entry Size 00:19:53.491 Max: 1 00:19:53.491 Min: 1 00:19:53.491 Number of Namespaces: 0 00:19:53.491 Compare Command: Not Supported 00:19:53.491 Write Uncorrectable Command: Not Supported 00:19:53.491 Dataset Management Command: Not Supported 00:19:53.491 Write Zeroes Command: Not Supported 00:19:53.491 Set Features Save Field: Not Supported 00:19:53.491 Reservations: Not Supported 00:19:53.491 Timestamp: Not Supported 00:19:53.491 Copy: Not Supported 00:19:53.491 Volatile Write Cache: Not Present 00:19:53.491 Atomic Write Unit (Normal): 1 00:19:53.491 Atomic Write Unit (PFail): 1 00:19:53.491 Atomic Compare & Write Unit: 1 00:19:53.491 Fused Compare & Write: Not Supported 00:19:53.491 Scatter-Gather List 00:19:53.491 SGL Command Set: Supported 00:19:53.491 SGL Keyed: Not Supported 00:19:53.491 SGL Bit Bucket Descriptor: Not Supported 00:19:53.491 SGL Metadata Pointer: Not Supported 00:19:53.491 Oversized SGL: Not Supported 00:19:53.491 SGL Metadata Address: Not Supported 00:19:53.491 SGL Offset: Supported 00:19:53.491 Transport SGL Data Block: Not Supported 00:19:53.491 Replay Protected Memory Block: Not Supported 00:19:53.491 00:19:53.491 Firmware Slot Information 00:19:53.491 ========================= 00:19:53.491 Active slot: 0 00:19:53.491 00:19:53.491 00:19:53.491 Error Log 00:19:53.491 ========= 00:19:53.491 00:19:53.491 Active Namespaces 00:19:53.491 ================= 00:19:53.491 Discovery Log Page 00:19:53.491 ================== 00:19:53.491 Generation Counter: 2 00:19:53.491 Number of Records: 2 00:19:53.491 Record Format: 0 00:19:53.491 00:19:53.491 Discovery Log Entry 0 00:19:53.491 ---------------------- 00:19:53.491 Transport Type: 3 (TCP) 00:19:53.491 Address Family: 1 (IPv4) 00:19:53.491 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:53.491 Entry Flags: 00:19:53.491 Duplicate Returned Information: 0 00:19:53.491 Explicit Persistent Connection Support for Discovery: 0 00:19:53.491 Transport Requirements: 00:19:53.491 Secure Channel: Not Specified 00:19:53.491 Port ID: 1 (0x0001) 00:19:53.491 Controller ID: 65535 (0xffff) 00:19:53.491 Admin Max SQ Size: 32 00:19:53.491 Transport Service Identifier: 4420 00:19:53.491 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:53.491 Transport Address: 10.0.0.1 00:19:53.491 Discovery Log Entry 1 00:19:53.491 ---------------------- 00:19:53.491 Transport Type: 3 (TCP) 00:19:53.491 Address Family: 1 (IPv4) 00:19:53.491 Subsystem Type: 2 (NVM Subsystem) 00:19:53.491 Entry Flags: 00:19:53.491 Duplicate Returned Information: 0 00:19:53.491 Explicit Persistent Connection Support for Discovery: 0 00:19:53.491 Transport Requirements: 00:19:53.491 
Secure Channel: Not Specified 00:19:53.491 Port ID: 1 (0x0001) 00:19:53.491 Controller ID: 65535 (0xffff) 00:19:53.491 Admin Max SQ Size: 32 00:19:53.491 Transport Service Identifier: 4420 00:19:53.491 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:53.491 Transport Address: 10.0.0.1 00:19:53.491 23:20:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:53.751 get_feature(0x01) failed 00:19:53.751 get_feature(0x02) failed 00:19:53.751 get_feature(0x04) failed 00:19:53.751 ===================================================== 00:19:53.751 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:53.751 ===================================================== 00:19:53.751 Controller Capabilities/Features 00:19:53.751 ================================ 00:19:53.751 Vendor ID: 0000 00:19:53.751 Subsystem Vendor ID: 0000 00:19:53.751 Serial Number: f856435cfcf01f63a0a6 00:19:53.751 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:53.751 Firmware Version: 6.7.0-68 00:19:53.751 Recommended Arb Burst: 6 00:19:53.751 IEEE OUI Identifier: 00 00 00 00:19:53.751 Multi-path I/O 00:19:53.751 May have multiple subsystem ports: Yes 00:19:53.751 May have multiple controllers: Yes 00:19:53.751 Associated with SR-IOV VF: No 00:19:53.751 Max Data Transfer Size: Unlimited 00:19:53.751 Max Number of Namespaces: 1024 00:19:53.751 Max Number of I/O Queues: 128 00:19:53.751 NVMe Specification Version (VS): 1.3 00:19:53.751 NVMe Specification Version (Identify): 1.3 00:19:53.751 Maximum Queue Entries: 1024 00:19:53.751 Contiguous Queues Required: No 00:19:53.751 Arbitration Mechanisms Supported 00:19:53.751 Weighted Round Robin: Not Supported 00:19:53.751 Vendor Specific: Not Supported 00:19:53.751 Reset Timeout: 7500 ms 00:19:53.751 Doorbell Stride: 4 bytes 00:19:53.751 NVM Subsystem Reset: Not Supported 00:19:53.751 Command Sets Supported 00:19:53.751 NVM Command Set: Supported 00:19:53.751 Boot Partition: Not Supported 00:19:53.751 Memory Page Size Minimum: 4096 bytes 00:19:53.751 Memory Page Size Maximum: 4096 bytes 00:19:53.751 Persistent Memory Region: Not Supported 00:19:53.751 Optional Asynchronous Events Supported 00:19:53.751 Namespace Attribute Notices: Supported 00:19:53.751 Firmware Activation Notices: Not Supported 00:19:53.751 ANA Change Notices: Supported 00:19:53.751 PLE Aggregate Log Change Notices: Not Supported 00:19:53.751 LBA Status Info Alert Notices: Not Supported 00:19:53.751 EGE Aggregate Log Change Notices: Not Supported 00:19:53.751 Normal NVM Subsystem Shutdown event: Not Supported 00:19:53.751 Zone Descriptor Change Notices: Not Supported 00:19:53.751 Discovery Log Change Notices: Not Supported 00:19:53.751 Controller Attributes 00:19:53.751 128-bit Host Identifier: Supported 00:19:53.751 Non-Operational Permissive Mode: Not Supported 00:19:53.751 NVM Sets: Not Supported 00:19:53.751 Read Recovery Levels: Not Supported 00:19:53.751 Endurance Groups: Not Supported 00:19:53.751 Predictable Latency Mode: Not Supported 00:19:53.751 Traffic Based Keep ALive: Supported 00:19:53.751 Namespace Granularity: Not Supported 00:19:53.751 SQ Associations: Not Supported 00:19:53.751 UUID List: Not Supported 00:19:53.751 Multi-Domain Subsystem: Not Supported 00:19:53.751 Fixed Capacity Management: Not Supported 00:19:53.751 Variable Capacity Management: Not Supported 00:19:53.751 
Delete Endurance Group: Not Supported 00:19:53.751 Delete NVM Set: Not Supported 00:19:53.751 Extended LBA Formats Supported: Not Supported 00:19:53.751 Flexible Data Placement Supported: Not Supported 00:19:53.751 00:19:53.751 Controller Memory Buffer Support 00:19:53.751 ================================ 00:19:53.751 Supported: No 00:19:53.751 00:19:53.751 Persistent Memory Region Support 00:19:53.751 ================================ 00:19:53.751 Supported: No 00:19:53.752 00:19:53.752 Admin Command Set Attributes 00:19:53.752 ============================ 00:19:53.752 Security Send/Receive: Not Supported 00:19:53.752 Format NVM: Not Supported 00:19:53.752 Firmware Activate/Download: Not Supported 00:19:53.752 Namespace Management: Not Supported 00:19:53.752 Device Self-Test: Not Supported 00:19:53.752 Directives: Not Supported 00:19:53.752 NVMe-MI: Not Supported 00:19:53.752 Virtualization Management: Not Supported 00:19:53.752 Doorbell Buffer Config: Not Supported 00:19:53.752 Get LBA Status Capability: Not Supported 00:19:53.752 Command & Feature Lockdown Capability: Not Supported 00:19:53.752 Abort Command Limit: 4 00:19:53.752 Async Event Request Limit: 4 00:19:53.752 Number of Firmware Slots: N/A 00:19:53.752 Firmware Slot 1 Read-Only: N/A 00:19:53.752 Firmware Activation Without Reset: N/A 00:19:53.752 Multiple Update Detection Support: N/A 00:19:53.752 Firmware Update Granularity: No Information Provided 00:19:53.752 Per-Namespace SMART Log: Yes 00:19:53.752 Asymmetric Namespace Access Log Page: Supported 00:19:53.752 ANA Transition Time : 10 sec 00:19:53.752 00:19:53.752 Asymmetric Namespace Access Capabilities 00:19:53.752 ANA Optimized State : Supported 00:19:53.752 ANA Non-Optimized State : Supported 00:19:53.752 ANA Inaccessible State : Supported 00:19:53.752 ANA Persistent Loss State : Supported 00:19:53.752 ANA Change State : Supported 00:19:53.752 ANAGRPID is not changed : No 00:19:53.752 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:53.752 00:19:53.752 ANA Group Identifier Maximum : 128 00:19:53.752 Number of ANA Group Identifiers : 128 00:19:53.752 Max Number of Allowed Namespaces : 1024 00:19:53.752 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:53.752 Command Effects Log Page: Supported 00:19:53.752 Get Log Page Extended Data: Supported 00:19:53.752 Telemetry Log Pages: Not Supported 00:19:53.752 Persistent Event Log Pages: Not Supported 00:19:53.752 Supported Log Pages Log Page: May Support 00:19:53.752 Commands Supported & Effects Log Page: Not Supported 00:19:53.752 Feature Identifiers & Effects Log Page:May Support 00:19:53.752 NVMe-MI Commands & Effects Log Page: May Support 00:19:53.752 Data Area 4 for Telemetry Log: Not Supported 00:19:53.752 Error Log Page Entries Supported: 128 00:19:53.752 Keep Alive: Supported 00:19:53.752 Keep Alive Granularity: 1000 ms 00:19:53.752 00:19:53.752 NVM Command Set Attributes 00:19:53.752 ========================== 00:19:53.752 Submission Queue Entry Size 00:19:53.752 Max: 64 00:19:53.752 Min: 64 00:19:53.752 Completion Queue Entry Size 00:19:53.752 Max: 16 00:19:53.752 Min: 16 00:19:53.752 Number of Namespaces: 1024 00:19:53.752 Compare Command: Not Supported 00:19:53.752 Write Uncorrectable Command: Not Supported 00:19:53.752 Dataset Management Command: Supported 00:19:53.752 Write Zeroes Command: Supported 00:19:53.752 Set Features Save Field: Not Supported 00:19:53.752 Reservations: Not Supported 00:19:53.752 Timestamp: Not Supported 00:19:53.752 Copy: Not Supported 00:19:53.752 Volatile Write Cache: Present 
00:19:53.752 Atomic Write Unit (Normal): 1 00:19:53.752 Atomic Write Unit (PFail): 1 00:19:53.752 Atomic Compare & Write Unit: 1 00:19:53.752 Fused Compare & Write: Not Supported 00:19:53.752 Scatter-Gather List 00:19:53.752 SGL Command Set: Supported 00:19:53.752 SGL Keyed: Not Supported 00:19:53.752 SGL Bit Bucket Descriptor: Not Supported 00:19:53.752 SGL Metadata Pointer: Not Supported 00:19:53.752 Oversized SGL: Not Supported 00:19:53.752 SGL Metadata Address: Not Supported 00:19:53.752 SGL Offset: Supported 00:19:53.752 Transport SGL Data Block: Not Supported 00:19:53.752 Replay Protected Memory Block: Not Supported 00:19:53.752 00:19:53.752 Firmware Slot Information 00:19:53.752 ========================= 00:19:53.752 Active slot: 0 00:19:53.752 00:19:53.752 Asymmetric Namespace Access 00:19:53.752 =========================== 00:19:53.752 Change Count : 0 00:19:53.752 Number of ANA Group Descriptors : 1 00:19:53.752 ANA Group Descriptor : 0 00:19:53.752 ANA Group ID : 1 00:19:53.752 Number of NSID Values : 1 00:19:53.752 Change Count : 0 00:19:53.752 ANA State : 1 00:19:53.752 Namespace Identifier : 1 00:19:53.752 00:19:53.752 Commands Supported and Effects 00:19:53.752 ============================== 00:19:53.752 Admin Commands 00:19:53.752 -------------- 00:19:53.752 Get Log Page (02h): Supported 00:19:53.752 Identify (06h): Supported 00:19:53.752 Abort (08h): Supported 00:19:53.752 Set Features (09h): Supported 00:19:53.752 Get Features (0Ah): Supported 00:19:53.752 Asynchronous Event Request (0Ch): Supported 00:19:53.752 Keep Alive (18h): Supported 00:19:53.752 I/O Commands 00:19:53.752 ------------ 00:19:53.752 Flush (00h): Supported 00:19:53.752 Write (01h): Supported LBA-Change 00:19:53.752 Read (02h): Supported 00:19:53.752 Write Zeroes (08h): Supported LBA-Change 00:19:53.752 Dataset Management (09h): Supported 00:19:53.752 00:19:53.752 Error Log 00:19:53.752 ========= 00:19:53.752 Entry: 0 00:19:53.752 Error Count: 0x3 00:19:53.752 Submission Queue Id: 0x0 00:19:53.752 Command Id: 0x5 00:19:53.752 Phase Bit: 0 00:19:53.752 Status Code: 0x2 00:19:53.752 Status Code Type: 0x0 00:19:53.752 Do Not Retry: 1 00:19:53.752 Error Location: 0x28 00:19:53.752 LBA: 0x0 00:19:53.752 Namespace: 0x0 00:19:53.752 Vendor Log Page: 0x0 00:19:53.752 ----------- 00:19:53.752 Entry: 1 00:19:53.752 Error Count: 0x2 00:19:53.752 Submission Queue Id: 0x0 00:19:53.752 Command Id: 0x5 00:19:53.752 Phase Bit: 0 00:19:53.752 Status Code: 0x2 00:19:53.752 Status Code Type: 0x0 00:19:53.752 Do Not Retry: 1 00:19:53.752 Error Location: 0x28 00:19:53.752 LBA: 0x0 00:19:53.752 Namespace: 0x0 00:19:53.752 Vendor Log Page: 0x0 00:19:53.752 ----------- 00:19:53.752 Entry: 2 00:19:53.752 Error Count: 0x1 00:19:53.752 Submission Queue Id: 0x0 00:19:53.752 Command Id: 0x4 00:19:53.752 Phase Bit: 0 00:19:53.752 Status Code: 0x2 00:19:53.752 Status Code Type: 0x0 00:19:53.752 Do Not Retry: 1 00:19:53.752 Error Location: 0x28 00:19:53.752 LBA: 0x0 00:19:53.752 Namespace: 0x0 00:19:53.752 Vendor Log Page: 0x0 00:19:53.752 00:19:53.752 Number of Queues 00:19:53.752 ================ 00:19:53.752 Number of I/O Submission Queues: 128 00:19:53.752 Number of I/O Completion Queues: 128 00:19:53.752 00:19:53.752 ZNS Specific Controller Data 00:19:53.752 ============================ 00:19:53.752 Zone Append Size Limit: 0 00:19:53.752 00:19:53.752 00:19:53.752 Active Namespaces 00:19:53.752 ================= 00:19:53.752 get_feature(0x05) failed 00:19:53.752 Namespace ID:1 00:19:53.752 Command Set Identifier: NVM (00h) 
00:19:53.752 Deallocate: Supported 00:19:53.752 Deallocated/Unwritten Error: Not Supported 00:19:53.752 Deallocated Read Value: Unknown 00:19:53.752 Deallocate in Write Zeroes: Not Supported 00:19:53.752 Deallocated Guard Field: 0xFFFF 00:19:53.752 Flush: Supported 00:19:53.752 Reservation: Not Supported 00:19:53.752 Namespace Sharing Capabilities: Multiple Controllers 00:19:53.752 Size (in LBAs): 1310720 (5GiB) 00:19:53.752 Capacity (in LBAs): 1310720 (5GiB) 00:19:53.752 Utilization (in LBAs): 1310720 (5GiB) 00:19:53.752 UUID: 84b9c002-c265-490b-b7e6-1aa6ed5c411e 00:19:53.752 Thin Provisioning: Not Supported 00:19:53.752 Per-NS Atomic Units: Yes 00:19:53.752 Atomic Boundary Size (Normal): 0 00:19:53.752 Atomic Boundary Size (PFail): 0 00:19:53.752 Atomic Boundary Offset: 0 00:19:53.752 NGUID/EUI64 Never Reused: No 00:19:53.752 ANA group ID: 1 00:19:53.752 Namespace Write Protected: No 00:19:53.752 Number of LBA Formats: 1 00:19:53.752 Current LBA Format: LBA Format #00 00:19:53.752 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:53.752 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:53.752 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.753 rmmod nvme_tcp 00:19:53.753 rmmod nvme_fabrics 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.753 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:54.011 
23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:54.011 23:20:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:54.578 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:54.836 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:54.836 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:54.836 00:19:54.836 real 0m2.819s 00:19:54.836 user 0m0.951s 00:19:54.836 sys 0m1.370s 00:19:54.836 23:20:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.836 23:20:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.836 ************************************ 00:19:54.836 END TEST nvmf_identify_kernel_target 00:19:54.836 ************************************ 00:19:54.836 23:20:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:54.836 23:20:17 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:54.836 23:20:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:54.836 23:20:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.836 23:20:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.836 ************************************ 00:19:54.836 START TEST nvmf_auth_host 00:19:54.836 ************************************ 00:19:54.836 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:54.836 * Looking for test storage... 
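The identify_kernel_target run above drives the in-kernel nvmet target entirely through configfs: load nvmet, create the subsystem, expose the chosen block device as namespace 1, open a TCP port on 10.0.0.1:4420, link the subsystem to the port, and on exit undo it all (the echo 0 / rm / rmdir / modprobe -r sequence traced right before this point). Note that xtrace does not print redirection targets, so the attribute file names in the sketch below are the standard nvmet configfs attributes and are an assumption rather than something copied from the log:

    # Hedged recap of configure_kernel_target / clean_kernel_target as traced
    # above. Values (NQN, device, address) come from the log; the echo targets
    # (attr_*, device_path, enable, addr_*) are assumed standard nvmet files.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"   # assumed target
    echo 1             > "$subsys/attr_allow_any_host"               # assumed target
    echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # teardown, mirroring the clean_kernel_target trace
    echo 0 > "$subsys/namespaces/1/enable"                            # assumed target
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

With the port linked, the nvme discover and spdk_nvme_identify commands seen earlier in the trace can reach the kernel target at 10.0.0.1:4420 from the root namespace.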
00:19:55.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:55.095 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.095 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:55.095 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.095 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.095 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:55.096 Cannot find device "nvmf_tgt_br" 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.096 Cannot find device "nvmf_tgt_br2" 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:55.096 Cannot find device "nvmf_tgt_br" 
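The nvmf_veth_init sequence now being repeated for the auth test (and traced in full for the identify test above) builds the test topology: a nvmf_tgt_ns_spdk namespace holding the two target-side interfaces (10.0.0.2 and 10.0.0.3), an initiator-side nvmf_init_if (10.0.0.1) left in the root namespace, the three bridge-side veth peers enslaved to an nvmf_br bridge, and iptables rules admitting NVMe/TCP on port 4420 plus bridge-internal forwarding; the three pings that follow verify connectivity in both directions. Restated as a plain command list, using only commands visible in the trace:

    # The nvmf_veth_init topology, restated from the commands in the trace.
    ip netns add nvmf_tgt_ns_spdk

    # one initiator-side and two target-side veth pairs
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up on both sides
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP (4420) and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT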
00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:55.096 Cannot find device "nvmf_tgt_br2" 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.096 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:55.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:19:55.355 00:19:55.355 --- 10.0.0.2 ping statistics --- 00:19:55.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.355 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:55.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:55.355 00:19:55.355 --- 10.0.0.3 ping statistics --- 00:19:55.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.355 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:55.355 00:19:55.355 --- 10.0.0.1 ping statistics --- 00:19:55.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.355 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78918 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78918 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78918 ']' 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.355 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.355 23:20:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.356 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.356 23:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=200f234e8c82cb1d1e63d787e392ff56 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.02t 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 200f234e8c82cb1d1e63d787e392ff56 0 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 200f234e8c82cb1d1e63d787e392ff56 0 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=200f234e8c82cb1d1e63d787e392ff56 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:56.304 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.02t 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.02t 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.02t 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=75e34faf13ed963471f8a459ff24cfd6c9a116bd6a1a7559d9f423177f1ac6dc 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Zt0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 75e34faf13ed963471f8a459ff24cfd6c9a116bd6a1a7559d9f423177f1ac6dc 3 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 75e34faf13ed963471f8a459ff24cfd6c9a116bd6a1a7559d9f423177f1ac6dc 3 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=75e34faf13ed963471f8a459ff24cfd6c9a116bd6a1a7559d9f423177f1ac6dc 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Zt0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Zt0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Zt0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d903035630f291b1037e88222fc19f3f735de2a941494d6 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BMn 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d903035630f291b1037e88222fc19f3f735de2a941494d6 0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d903035630f291b1037e88222fc19f3f735de2a941494d6 0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d903035630f291b1037e88222fc19f3f735de2a941494d6 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:56.563 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
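The repeated gen_dhchap_key calls in this part of the trace all follow one recipe: read len/2 random bytes with xxd, wrap the hex string into a DHHC-1:<digest id>: secret with a small Python helper, and store it in a temp file restricted to mode 0600. A simplified sketch of that helper; the base64-plus-CRC32 encoding step is my reading of the NVMe in-band auth secret format, not copied from the script:

# Illustrative re-implementation of gen_dhchap_key <digest> <len>.
gen_dhchap_key() {
    local digest=$1 len=$2
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    # len counts hex characters, so pull len/2 raw bytes from /dev/urandom.
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-${digest}.XXX")

    # Emit DHHC-1:<digest id>:<base64(key bytes + little-endian CRC32)>: (encoding assumed).
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02d}:{base64.b64encode(key + crc).decode()}:")
PY

    chmod 0600 "$file"
    echo "$file"
}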
python - 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BMn 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BMn 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BMn 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=634dd865be817af39ca39fc595b1a22aeb2973aa706bdabb 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fQK 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 634dd865be817af39ca39fc595b1a22aeb2973aa706bdabb 2 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 634dd865be817af39ca39fc595b1a22aeb2973aa706bdabb 2 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=634dd865be817af39ca39fc595b1a22aeb2973aa706bdabb 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:56.564 23:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fQK 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fQK 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fQK 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba1bd0d4e0bc74e4a8e49e02325af6f1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5gS 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba1bd0d4e0bc74e4a8e49e02325af6f1 
1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba1bd0d4e0bc74e4a8e49e02325af6f1 1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba1bd0d4e0bc74e4a8e49e02325af6f1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:56.564 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5gS 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5gS 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5gS 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=58e0f7778386a18ea4ab36a794ac81a6 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.z0s 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 58e0f7778386a18ea4ab36a794ac81a6 1 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 58e0f7778386a18ea4ab36a794ac81a6 1 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=58e0f7778386a18ea4ab36a794ac81a6 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.z0s 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.z0s 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.z0s 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:19:56.823 23:20:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36fb09eb1715128716bf3e21b85bf6d4d7b4c8ea2b9b76d2 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.G99 00:19:56.823 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36fb09eb1715128716bf3e21b85bf6d4d7b4c8ea2b9b76d2 2 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36fb09eb1715128716bf3e21b85bf6d4d7b4c8ea2b9b76d2 2 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36fb09eb1715128716bf3e21b85bf6d4d7b4c8ea2b9b76d2 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.G99 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.G99 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.G99 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=42f3814f2d719430eb2b8ac1e8b7932e 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7B0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 42f3814f2d719430eb2b8ac1e8b7932e 0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 42f3814f2d719430eb2b8ac1e8b7932e 0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=42f3814f2d719430eb2b8ac1e8b7932e 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7B0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7B0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.7B0 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7bd685836e96f3a5de7a132a9008978041b73e04cb802e0445af56960523271d 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.g8s 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7bd685836e96f3a5de7a132a9008978041b73e04cb802e0445af56960523271d 3 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7bd685836e96f3a5de7a132a9008978041b73e04cb802e0445af56960523271d 3 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7bd685836e96f3a5de7a132a9008978041b73e04cb802e0445af56960523271d 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:19:56.824 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.g8s 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.g8s 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.g8s 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78918 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78918 ']' 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
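By this point host/auth.sh has five host secrets and four controller secrets on disk; the fifth controller slot is left empty, presumably so that key index 4 exercises the no-controller-key (unidirectional) path. The pairing from this particular run (temp-file names are random and will differ between runs) is:

keys[0]=/tmp/spdk.key-null.02t;   ckeys[0]=/tmp/spdk.key-sha512.Zt0
keys[1]=/tmp/spdk.key-null.BMn;   ckeys[1]=/tmp/spdk.key-sha384.fQK
keys[2]=/tmp/spdk.key-sha256.5gS; ckeys[2]=/tmp/spdk.key-sha256.z0s
keys[3]=/tmp/spdk.key-sha384.G99; ckeys[3]=/tmp/spdk.key-null.7B0
keys[4]=/tmp/spdk.key-sha512.g8s; ckeys[4]=                 # no controller key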
00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.083 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.02t 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Zt0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Zt0 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BMn 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fQK ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fQK 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5gS 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.z0s ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z0s 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.G99 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7B0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7B0 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.g8s 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.342 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
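The keyring_file_add_key RPCs traced above register every generated secret with the running target under the names key0..key4 and ckey0..ckey3; those names are what the later bdev_nvme_attach_controller calls reference. Condensed, the registration loop is roughly (rpc.py path assumed to match the test VM):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
    # Host secret goes in as keyN ...
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # ... and the matching controller secret, when one was generated, as ckeyN.
    if [[ -n ${ckeys[$i]} ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done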
00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:57.343 23:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:57.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:57.860 Waiting for block devices as requested 00:19:57.860 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:57.860 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:58.427 No valid GPT data, bailing 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:58.427 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:58.686 No valid GPT data, bailing 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:58.686 23:20:20 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:58.686 No valid GPT data, bailing 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:58.686 No valid GPT data, bailing 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:19:58.686 23:20:21 
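configure_kernel_target, running here, first scans /sys/block for an NVMe device it may claim (the spdk-gpt.py / blkid probes printing "No valid GPT data, bailing"), settles on /dev/nvme1n1, and then exports it through the kernel nvmet configfs tree. A compressed sketch of those configfs writes; the redirect targets are not visible in the xtrace, so the attribute names below (attr_model, attr_allow_any_host, device_path, enable) are assumed from the upstream nvmet driver:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"              # model string seen in the trace
echo 1 > "$subsys/attr_allow_any_host"                                   # relaxed until auth is configured
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"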
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:58.686 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -a 10.0.0.1 -t tcp -s 4420 00:19:58.945 00:19:58.945 Discovery Log Number of Records 2, Generation counter 2 00:19:58.945 =====Discovery Log Entry 0====== 00:19:58.945 trtype: tcp 00:19:58.945 adrfam: ipv4 00:19:58.945 subtype: current discovery subsystem 00:19:58.945 treq: not specified, sq flow control disable supported 00:19:58.945 portid: 1 00:19:58.945 trsvcid: 4420 00:19:58.945 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:58.945 traddr: 10.0.0.1 00:19:58.945 eflags: none 00:19:58.945 sectype: none 00:19:58.945 =====Discovery Log Entry 1====== 00:19:58.945 trtype: tcp 00:19:58.945 adrfam: ipv4 00:19:58.945 subtype: nvme subsystem 00:19:58.945 treq: not specified, sq flow control disable supported 00:19:58.945 portid: 1 00:19:58.945 trsvcid: 4420 00:19:58.945 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:58.945 traddr: 10.0.0.1 00:19:58.945 eflags: none 00:19:58.945 sectype: none 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- 
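With the port addressed at 10.0.0.1:4420 and the nvme discover sanity check above showing both the discovery subsystem and nqn.2024-02.io.spdk:cnode0, the test locks the subsystem down to a single host NQN and programs that host's DH-HMAC-CHAP material (nvmet_auth_set_key sha256 ffdhe2048 1). Again the redirect targets are hidden by the xtrace, so the dhchap_* attribute names below are an assumption based on the kernel's nvmet in-band auth support; the key strings are the ones from this run:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"     # only explicitly allowed hosts from now on
ln -s "$host" "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"

echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==:" > "$host/dhchap_key"
echo "DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==:" > "$host/dhchap_ctrl_key"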
host/auth.sh@93 -- # IFS=, 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.945 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.946 nvme0n1 00:19:58.946 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- 
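connect_authenticate, whose first invocation (all digests, all DH groups, key index 1) is traced above, configures the bdev_nvme module with the permitted DH-HMAC-CHAP digests and groups and then attaches to the kernel target using the named keyring entries. Reduced to the two RPCs involved (arguments copied from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keyid=1

"$rpc" bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"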
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- 
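Every attach in this log is verified and torn down the same way before the next combination runs: bdev_nvme_get_controllers must report the controller by name, and the controller is detached so the following digest/dhgroup/key case starts from a clean state. Continuing the sketch above:

name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                       # authentication failed if the controller never appeared
"$rpc" bdev_nvme_detach_controller nvme0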
nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 nvme0n1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.205 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.463 nvme0n1 00:19:59.463 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.464 23:20:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.464 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.464 nvme0n1 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:59.723 23:20:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 nvme0n1 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.723 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:59.724 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.982 nvme0n1 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:59.982 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.983 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:19:59.983 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:19:59.983 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:59.983 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.241 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.242 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.500 nvme0n1 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.500 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.501 23:20:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 nvme0n1 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.760 23:20:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 nvme0n1 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.760 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.054 nvme0n1 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.054 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
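For reference, the get_main_ns_ip trace that keeps recurring above (nvmf/common.sh@741-755) boils down to roughly the helper below. This is a sketch reconstructed from the xtrace output only: the failure-path returns and the TEST_TRANSPORT variable name are assumptions, not verbatim source; what is grounded in the trace is the candidate map, the -z checks, and the indirect lookup that yields 10.0.0.1 for this tcp/virt run.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Bail out if the transport or its candidate variable name is unset
      # (assumed behavior; the trace only shows the success path).
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      # The map stores a *variable name*; dereference it to get the address
      # (NVMF_INITIATOR_IP -> 10.0.0.1 in this run).
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }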
00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.055 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.313 nvme0n1 00:20:01.313 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.314 23:20:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
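The connect_authenticate steps traced at host/auth.sh@55-65 amount to roughly the sequence below: restrict the initiator to one digest/dhgroup pair, attach with the matching DH-HMAC-CHAP host key (plus the controller key when one exists), confirm the controller shows up as nvme0, then detach. This is a paraphrase of the visible RPC calls, not the literal script; quoting, error handling, and the ckeys array population are taken from earlier trace lines and otherwise assumed.

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ckey=()
      # Controller key is optional; pass it only when a ckey exists for this id
      # (same expansion as the trace at host/auth.sh@58).
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Allow exactly the digest/dhgroup under test on the initiator side ...
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # ... then connect with the host key and verify the controller came up.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }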
00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.880 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.138 nvme0n1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.138 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.398 nvme0n1 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.398 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.399 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.658 nvme0n1 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.658 23:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.659 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.918 nvme0n1 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
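The outer structure driving all of these near-identical iterations is visible at host/auth.sh@101-104: a nested loop over every dhgroup and every key id, priming the kernel soft target (nvmet_auth_set_key) before each initiator-side connect (connect_authenticate). Sketched below under the assumption that dhgroups and keys are the arrays populated earlier in the script, as the trace implies.

  # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ... against key ids 0..4 in this run.
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"   # push digest/dhgroup/key to the target
          connect_authenticate "sha256" "$dhgroup" "$keyid" # connect, verify nvme0, detach
      done
  done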
00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.918 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:02.919 23:20:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.919 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.177 nvme0n1 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.177 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:03.178 23:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.080 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 nvme0n1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.339 23:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.598 nvme0n1 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.598 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.882 
23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.882 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 nvme0n1 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.165 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 nvme0n1 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.424 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 23:20:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.683 23:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.942 nvme0n1 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:06.942 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.943 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.510 nvme0n1 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.510 23:20:29 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:07.510 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:07.511 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.769 23:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 nvme0n1 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.335 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.336 23:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 nvme0n1 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.902 
23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
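The block above, like every block before it, walks the same host-side sequence for one digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters, attach the controller with the matching key pair, confirm that an authenticated controller named nvme0 actually exists, then detach it before the next combination. A minimal sketch of that sequence, reconstructed only from the RPCs and arguments that appear verbatim in the trace (rpc_cmd is the test suite's RPC wrapper; the 10.0.0.1:4420 address and the host/subsystem NQNs are the ones shown above, and key0/ckey0 stand in for whichever keyid is being tested):

  # Allow exactly one digest and one DH group on the host side.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Attach with the host key for this keyid and, when the test defines one,
  # the matching controller (bidirectional) key.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # The connect only counts if the authenticated controller actually shows up.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

  # Tear down so the next digest/dhgroup/keyid combination starts clean.
  rpc_cmd bdev_nvme_detach_controller nvme0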
00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.902 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.467 nvme0n1 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.467 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.468 
23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.468 23:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.033 nvme0n1 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.033 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.291 nvme0n1 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.291 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
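From this point the trace has moved on from hmac(sha256) to hmac(sha384) and restarted at ffdhe2048 with keyid 0, so the whole matrix of digests, DH groups and key ids gets exercised. The driving loop, sketched from the for-headers and calls traced at host/auth.sh@100-@104 above (array contents are inferred from the values actually seen; the bodies of nvmet_auth_set_key and connect_authenticate are the ones traced in the preceding blocks):

  for digest in "${digests[@]}"; do          # sha256, sha384, ... as traced
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 through ffdhe8192
          for keyid in "${!keys[@]}"; do     # keyids 0..4 in this run
              # Program the target side with this key (and controller key, if set).
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              # Host side: set options, attach, verify, detach (see sketch above).
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done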
00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.292 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 nvme0n1 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.550 23:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.550 nvme0n1 00:20:10.550 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.550 23:20:33 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:10.808 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.809 nvme0n1 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.809 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.080 nvme0n1 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
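The trace above and below repeats one fixed pattern for every digest/dhgroup/keyid combination: program the secret into the target with nvmet_auth_set_key, restrict the SPDK host side to the matching DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach a controller over TCP to 10.0.0.1:4420, check that a controller named nvme0 shows up, and detach it before the next round. The sketch below condenses one such iteration for readability; it is an illustration inferred from this log, not the actual host/auth.sh source. The loop framing, variable names and comments are assumptions, while the rpc_cmd/nvmet_auth_set_key invocations and their arguments are taken from the trace itself (rpc_cmd and nvmet_auth_set_key are the test helpers the trace is already running).

# Condensed sketch of one connect_authenticate-style iteration as exercised above.
# Assumed framing; key0/ckey0 are key names set up earlier in the test run, and
# their DHHC-1 secrets are deliberately not repeated here.
digest=sha384
dhgroup=ffdhe3072
keyid=0

# Target side: install the host secret (and controller secret, if present)
# for this digest/DH-group combination.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Host side: allow only the digest and DH group under test ...
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# ... attach over TCP with the matching key pair ...
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# ... verify the authenticated controller came up, then tear it down so the
# next digest/dhgroup/keyid combination starts from a clean state.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0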
00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.080 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.346 nvme0n1 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.346 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.347 nvme0n1 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.347 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 nvme0n1 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.609 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 nvme0n1 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.868 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.127 nvme0n1 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.127 23:20:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.127 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 nvme0n1 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.385 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.386 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.644 nvme0n1 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.644 23:20:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.644 23:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.903 nvme0n1 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:12.903 23:20:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:12.903 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.904 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 nvme0n1 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:13.162 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:13.163 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.422 nvme0n1 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.422 23:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.681 nvme0n1 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:13.681 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:13.682 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:13.682 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.682 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.682 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.250 nvme0n1 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.250 23:20:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.250 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 nvme0n1 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:14.509 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.510 23:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 nvme0n1 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.077 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
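The nvmf/common.sh@741-755 entries around this point trace the get_main_ns_ip helper, which resolves the address used by the attach call that follows (10.0.0.1 here). A minimal reconstruction of its logic, inferred only from this xtrace output, is sketched below; the variable name TEST_TRANSPORT and the return-on-failure behaviour are assumptions, since the trace shows only the expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1).

get_main_ns_ip() {
    # Sketch reconstructed from the nvmf/common.sh@741-755 trace; not the
    # verbatim SPDK helper. TEST_TRANSPORT is an assumed variable name.
    local ip
    local -A ip_candidates=()

    # Map each transport to the name of the env var holding its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion resolves e.g. NVMF_INITIATOR_IP -> 10.0.0.1.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}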
00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.337 nvme0n1 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
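The host/auth.sh@42-51 entries above trace nvmet_auth_set_key, which provisions the kernel nvmet target side of the DH-HMAC-CHAP handshake before the initiator reconnects. A hedged reconstruction follows; xtrace does not show where the echoed values are written, so the configfs paths, the keys/ckeys array names, and the host NQN directory are assumptions about the equivalent standalone steps.

nvmet_auth_set_key() {
    # Sketch inferred from the host/auth.sh@42-51 trace; the configfs target
    # paths below are an assumption and do not appear in the xtrace output.
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"
    echo "${dhgroup}" > "${hostdir}/dhchap_dhgroup"
    echo "${key}" > "${hostdir}/dhchap_key"
    # A controller (bidirectional) secret is written only when a ckey exists
    # for this keyid, matching the [[ -z ... ]] guard seen in the trace.
    [[ -z $ckey ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"
}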
00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.337 23:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.904 nvme0n1 00:20:15.905 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.905 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.905 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:15.905 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.905 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.163 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.163 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.164 23:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.731 nvme0n1 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:16.731 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.732 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.304 nvme0n1 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:17.304 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.305 23:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.870 nvme0n1 00:20:17.870 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.870 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.871 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.130 23:20:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.130 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 nvme0n1 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 nvme0n1 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.697 23:20:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.697 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:18.955 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 nvme0n1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.956 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 nvme0n1 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 23:20:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.215 23:20:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 nvme0n1 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 nvme0n1 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.474 23:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.733 nvme0n1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.733 
23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.733 23:20:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.733 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.991 nvme0n1 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:19.991 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
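Editor's note: the nvmet_auth_set_key calls traced above (host/auth.sh@42-51) install the matching secret on the kernel nvmet target before each connection attempt; the echoes of 'hmac(sha512)', the DH group and the two DHHC-1 secrets are presumably redirected into the host's DH-HMAC-CHAP attributes in configfs. A minimal sketch of that step, assuming the standard nvmet configfs layout (the actual redirection targets are not visible in the xtrace output, and the keys/ckeys arrays come from the surrounding test script):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed configfs location for the host's DH-HMAC-CHAP attributes.
        local host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"      # e.g. 'hmac(sha512)'
        echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"   # e.g. ffdhe3072
        echo "${key}"          > "${host_cfs}/dhchap_key"       # DHHC-1:... host secret
        # Controller secret only when bidirectional auth is requested (ckeys[4] is empty).
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"
    }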
00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.992 nvme0n1 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:19.992 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
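Editor's note: get_main_ns_ip (nvmf/common.sh@741-755) runs before every attach. It maps the transport to the *name* of an environment variable and then dereferences it, which is why the trace prints ip=NVMF_INITIATOR_IP followed by the literal 10.0.0.1. A minimal sketch reconstructed from the visible trace (the TEST_TRANSPORT guard is an assumption; only its expanded value, tcp, appears in the log):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Bail out if the transport is unset or has no mapped variable.
        [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion; resolves to 10.0.0.1 above
        echo "${!ip}"
    }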
00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 nvme0n1 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.250 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.251 
23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.251 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 nvme0n1 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.509 23:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.510 23:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.510 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.510 23:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 nvme0n1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.768 23:20:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.768 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 nvme0n1 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
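Editor's note: the trace repeats the same host-side cycle for every dhgroup/keyid pair (host/auth.sh@101-104 driving @55-65): restrict the initiator to one digest and DH group, attach with the per-key DH-HMAC-CHAP secrets, confirm the controller enumerates as nvme0, then detach. A minimal sketch of that cycle; rpc_cmd, the key/ckey names, the NQNs and the loop bounds are taken from the trace, while anything else (including the omitted outer digest loop, this run being its sha512 pass) is an assumption:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Controller key is optional: ${ckeys[4]} is empty, so keyid 4 authenticates one-way.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if DH-HMAC-CHAP completes; confirm the controller exists.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    # Driver loops as seen at host/auth.sh@101-104: every DH group is exercised with every key.
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ...
        for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done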
00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 nvme0n1 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.290 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.548 nvme0n1 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.548 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.549 23:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.549 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.807 nvme0n1 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.807 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
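The trace above repeats one host-side pattern per digest/dhgroup/keyid combination: connect_authenticate restricts the SPDK bdev_nvme module to a single digest and DH group, attaches the controller with the matching host key (plus the controller key when one is defined), checks that a controller named nvme0 appears, and detaches it again. Condensed from the RPC calls visible in the trace, one iteration looks roughly like this sketch (10.0.0.1:4420 and the nqn.2024-02.io.spdk NQNs are the values used in this run; key0/ckey0 are key names the test set up earlier):
# host side of one connect_authenticate pass (sha512 / ffdhe6144 / keyid 0), paraphrased from the trace
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# the attach only completes if DH-HMAC-CHAP succeeds, so the controller list is the pass/fail check
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0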
00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.808 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.375 nvme0n1 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:22.375 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
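The target-side half, nvmet_auth_set_key, is the block of echo lines above ('hmac(sha512)', the DH group name, then the DHHC-1 secrets). xtrace does not show where those echoes are redirected; on a kernel nvmet target they would normally be written into the host's configfs attributes, so the sketch below assumes the usual /sys/kernel/config/nvmet/hosts layout and attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), none of which are visible in this log:
# assumed target-side counterpart of nvmet_auth_set_key; paths and attribute names are not shown in the trace
hostnqn=nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
echo ffdhe6144 > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup
echo "$key" > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key
# a controller (bidirectional) key is only written when a ckey is defined for this keyid
[[ -n $ckey ]] && echo "$ckey" > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key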
00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.376 23:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.635 nvme0n1 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.635 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.203 nvme0n1 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.203 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 nvme0n1 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.462 23:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 nvme0n1 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.030 23:20:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjAwZjIzNGU4YzgyY2IxZDFlNjNkNzg3ZTM5MmZmNTbCn5fx: 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzVlMzRmYWYxM2VkOTYzNDcxZjhhNDU5ZmYyNGNmZDZjOWExMTZiZDZhMWE3NTU5ZDlmNDIzMTc3ZjFhYzZkY1YTZEQ=: 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.030 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.031 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.598 nvme0n1 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:24.598 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.599 23:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.166 nvme0n1 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.166 23:20:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmExYmQwZDRlMGJjNzRlNGE4ZTQ5ZTAyMzI1YWY2ZjFFC+Ck: 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NThlMGY3Nzc4Mzg2YTE4ZWE0YWIzNmE3OTRhYzgxYTbOyZsv: 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.166 23:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.733 nvme0n1 00:20:25.733 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.733 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.733 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:25.733 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.733 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.734 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.992 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzZmYjA5ZWIxNzE1MTI4NzE2YmYzZTIxYjg1YmY2ZDRkN2I0YzhlYTJiOWI3NmQyfFyxVg==: 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: ]] 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDJmMzgxNGYyZDcxOTQzMGViMmI4YWMxZThiNzkzMmUU3v34: 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:25.993 23:20:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.993 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.560 nvme0n1 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2JkNjg1ODM2ZTk2ZjNhNWRlN2ExMzJhOTAwODk3ODA0MWI3M2UwNGNiODAyZTA0NDVhZjU2OTYwNTIzMjcxZASDF4E=: 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.560 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:26.561 23:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 nvme0n1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWQ5MDMwMzU2MzBmMjkxYjEwMzdlODgyMjJmYzE5ZjNmNzM1ZGUyYTk0MTQ5NGQ2V8DtKg==: 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjM0ZGQ4NjViZTgxN2FmMzljYTM5ZmM1OTViMWEyMmFlYjI5NzNhYTcwNmJkYWJiNZCHcw==: 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.129 
23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 request: 00:20:27.129 { 00:20:27.129 "name": "nvme0", 00:20:27.129 "trtype": "tcp", 00:20:27.129 "traddr": "10.0.0.1", 00:20:27.129 "adrfam": "ipv4", 00:20:27.129 "trsvcid": "4420", 00:20:27.129 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:27.129 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:27.129 "prchk_reftag": false, 00:20:27.129 "prchk_guard": false, 00:20:27.129 "hdgst": false, 00:20:27.129 "ddgst": false, 00:20:27.129 "method": "bdev_nvme_attach_controller", 00:20:27.129 "req_id": 1 00:20:27.129 } 00:20:27.129 Got JSON-RPC error response 00:20:27.129 response: 00:20:27.129 { 00:20:27.129 "code": -5, 00:20:27.129 "message": "Input/output error" 00:20:27.129 } 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:27.129 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.130 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.389 request: 00:20:27.389 { 00:20:27.389 "name": "nvme0", 00:20:27.389 "trtype": "tcp", 00:20:27.389 "traddr": "10.0.0.1", 00:20:27.389 "adrfam": "ipv4", 00:20:27.389 "trsvcid": "4420", 00:20:27.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:27.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:27.389 "prchk_reftag": false, 00:20:27.389 "prchk_guard": false, 00:20:27.389 "hdgst": false, 00:20:27.389 "ddgst": false, 00:20:27.389 "dhchap_key": "key2", 00:20:27.389 "method": "bdev_nvme_attach_controller", 00:20:27.389 "req_id": 1 00:20:27.389 } 00:20:27.389 Got JSON-RPC error response 00:20:27.389 response: 00:20:27.389 { 00:20:27.389 "code": -5, 00:20:27.389 "message": "Input/output error" 00:20:27.389 } 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:27.389 23:20:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.389 request: 00:20:27.389 { 00:20:27.389 "name": "nvme0", 00:20:27.389 "trtype": "tcp", 00:20:27.389 "traddr": "10.0.0.1", 00:20:27.389 "adrfam": "ipv4", 
00:20:27.389 "trsvcid": "4420", 00:20:27.389 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:27.389 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:27.389 "prchk_reftag": false, 00:20:27.389 "prchk_guard": false, 00:20:27.389 "hdgst": false, 00:20:27.389 "ddgst": false, 00:20:27.389 "dhchap_key": "key1", 00:20:27.389 "dhchap_ctrlr_key": "ckey2", 00:20:27.389 "method": "bdev_nvme_attach_controller", 00:20:27.389 "req_id": 1 00:20:27.389 } 00:20:27.389 Got JSON-RPC error response 00:20:27.389 response: 00:20:27.389 { 00:20:27.389 "code": -5, 00:20:27.389 "message": "Input/output error" 00:20:27.389 } 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:27.389 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.390 rmmod nvme_tcp 00:20:27.390 rmmod nvme_fabrics 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78918 ']' 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78918 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78918 ']' 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78918 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78918 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78918' 00:20:27.390 killing process with pid 78918 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78918 00:20:27.390 23:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78918 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.956 
23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:27.956 23:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.524 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.783 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.783 23:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.02t /tmp/spdk.key-null.BMn /tmp/spdk.key-sha256.5gS /tmp/spdk.key-sha384.G99 /tmp/spdk.key-sha512.g8s /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:28.783 23:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:29.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.055 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.055 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:29.055 00:20:29.055 real 0m34.251s 00:20:29.055 user 0m31.074s 00:20:29.055 sys 0m3.832s 00:20:29.055 23:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.055 23:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.055 
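To summarize the negative checks above: with the kernel nvmet target configured for DH-HMAC-CHAP, the host-side bdev_nvme_attach_controller RPC is expected to fail (JSON-RPC code -5, Input/output error) whenever no key, the wrong key slot, or a mismatched controller key is supplied, and the NOT wrapper treats that failure as a pass. A minimal, hedged sketch of the host-side calls follows, using only flags that appear in this log; the key names key1/ckey2 are assumed to have been registered with the application beforehand, a step outside this excerpt, and the default RPC socket is assumed.
# Hedged sketch of the host-side RPCs exercised by host/auth.sh above.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2
# With a missing or mismatched key the attach returns code -5, which is exactly
# what the request/response dumps above show.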
************************************ 00:20:29.055 END TEST nvmf_auth_host 00:20:29.055 ************************************ 00:20:29.315 23:20:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:29.315 23:20:51 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:20:29.315 23:20:51 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:29.315 23:20:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:29.315 23:20:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.315 23:20:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.315 ************************************ 00:20:29.315 START TEST nvmf_digest 00:20:29.315 ************************************ 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:29.315 * Looking for test storage... 00:20:29.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.315 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:29.316 Cannot find device "nvmf_tgt_br" 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.316 Cannot find device "nvmf_tgt_br2" 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:29.316 Cannot find device "nvmf_tgt_br" 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:29.316 23:20:51 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:29.316 Cannot find device "nvmf_tgt_br2" 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.316 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.575 23:20:51 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:29.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:20:29.575 00:20:29.575 --- 10.0.0.2 ping statistics --- 00:20:29.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.575 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:29.575 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.575 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:29.575 00:20:29.575 --- 10.0.0.3 ping statistics --- 00:20:29.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.575 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:29.575 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:29.576 00:20:29.576 --- 10.0.0.1 ping statistics --- 00:20:29.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.576 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:29.576 ************************************ 00:20:29.576 START TEST nvmf_digest_clean 00:20:29.576 ************************************ 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:29.576 23:20:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80480 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80480 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80480 ']' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.576 23:20:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.576 [2024-07-24 23:20:52.053115] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:29.576 [2024-07-24 23:20:52.053243] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.835 [2024-07-24 23:20:52.196985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.093 [2024-07-24 23:20:52.333052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.093 [2024-07-24 23:20:52.333115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.093 [2024-07-24 23:20:52.333147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.093 [2024-07-24 23:20:52.333160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.093 [2024-07-24 23:20:52.333169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
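For orientation, the digest test talks to an SPDK target running inside the network namespace prepared just above: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator interface, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into nvmf_tgt_ns_spdk, and everything is joined through the nvmf_br bridge with TCP port 4420 accepted in iptables. The nvmfappstart step seen here boils down to the sketch below; the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation.
# Launch the target inside the namespace and wait for its RPC socket (sketch).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # illustrative poll; the test uses the waitforlisten helper
done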
00:20:30.093 [2024-07-24 23:20:52.333209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.661 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 [2024-07-24 23:20:53.183448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:30.920 null0 00:20:30.920 [2024-07-24 23:20:53.246793] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.920 [2024-07-24 23:20:53.270919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80512 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80512 /var/tmp/bperf.sock 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80512 ']' 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.920 23:20:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:30.920 [2024-07-24 23:20:53.348763] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:30.920 [2024-07-24 23:20:53.348852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80512 ] 00:20:31.178 [2024-07-24 23:20:53.490673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.178 [2024-07-24 23:20:53.622319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.112 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.112 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:32.112 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:32.112 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:32.112 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:32.370 [2024-07-24 23:20:54.629675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:32.370 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:32.370 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:32.627 nvme0n1 00:20:32.627 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:32.627 23:20:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.885 Running I/O for 2 seconds... 
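Each run_bperf pass in this test follows the same recipe: start bdevperf against its own RPC socket, finish framework init, attach an NVMe-oF controller with the digest option under test (--ddgst for this first randread pass), drive I/O through bdevperf.py, then read the accel statistics to confirm which module executed the crc32c operations (the software module is expected while DSA is disabled). A condensed, hedged sketch using the paths from this log; the latency table that follows is the output of this first pass.
# Hedged sketch of one run_bperf pass (waitforlisten-style polling omitted).
SPDK=/home/vagrant/spdk_repo/spdk
BPERF=/var/tmp/bperf.sock
$SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
$SPDK/scripts/rpc.py -s $BPERF framework_start_init
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests
$SPDK/scripts/rpc.py -s $BPERF accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'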
00:20:34.787 00:20:34.787 Latency(us) 00:20:34.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.787 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:34.787 nvme0n1 : 2.01 15641.76 61.10 0.00 0.00 8177.28 7268.54 25022.84 00:20:34.787 =================================================================================================================== 00:20:34.787 Total : 15641.76 61.10 0.00 0.00 8177.28 7268.54 25022.84 00:20:34.787 0 00:20:34.787 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:34.787 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:34.787 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:34.787 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:34.787 | select(.opcode=="crc32c") 00:20:34.787 | "\(.module_name) \(.executed)"' 00:20:34.787 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80512 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80512 ']' 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80512 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80512 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:35.046 killing process with pid 80512 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80512' 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80512 00:20:35.046 Received shutdown signal, test time was about 2.000000 seconds 00:20:35.046 00:20:35.046 Latency(us) 00:20:35.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.046 =================================================================================================================== 00:20:35.046 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.046 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80512 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80572 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80572 /var/tmp/bperf.sock 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80572 ']' 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.305 23:20:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:35.563 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:35.563 Zero copy mechanism will not be used. 00:20:35.563 [2024-07-24 23:20:57.824897] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:35.563 [2024-07-24 23:20:57.824971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80572 ] 00:20:35.563 [2024-07-24 23:20:57.958820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.822 [2024-07-24 23:20:58.089915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.391 23:20:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.391 23:20:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:36.391 23:20:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:36.391 23:20:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:36.391 23:20:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:36.649 [2024-07-24 23:20:59.102612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:36.908 23:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:36.908 23:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:37.167 nvme0n1 00:20:37.167 23:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:37.167 23:20:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:37.167 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:37.167 Zero copy mechanism will not be used. 00:20:37.167 Running I/O for 2 seconds... 
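A quick consistency check for the throughput tables in this section: MiB/s is simply IOPS multiplied by the I/O size.
MiB/s = IOPS x IO size / 1048576
7653.29 x 131072 / 1048576 = 956.66 MiB/s   (the 128 KiB randread pass reported below)
15641.76 x 4096 / 1048576 = 61.10 MiB/s     (the 4 KiB randread pass above)
The repeated notice about the 65536-byte zero copy threshold records that these 128 KiB I/Os exceed the zero-copy cutoff, so the data path falls back to copying buffers.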
00:20:39.069 00:20:39.069 Latency(us) 00:20:39.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.069 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:39.069 nvme0n1 : 2.00 7653.29 956.66 0.00 0.00 2087.05 1750.11 3530.01 00:20:39.069 =================================================================================================================== 00:20:39.069 Total : 7653.29 956.66 0.00 0.00 2087.05 1750.11 3530.01 00:20:39.069 0 00:20:39.328 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:39.328 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:39.328 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:39.328 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:39.328 | select(.opcode=="crc32c") 00:20:39.328 | "\(.module_name) \(.executed)"' 00:20:39.328 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80572 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80572 ']' 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80572 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80572 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80572' 00:20:39.588 killing process with pid 80572 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80572 00:20:39.588 Received shutdown signal, test time was about 2.000000 seconds 00:20:39.588 00:20:39.588 Latency(us) 00:20:39.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.588 =================================================================================================================== 00:20:39.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.588 23:21:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80572 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80634 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80634 /var/tmp/bperf.sock 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80634 ']' 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:39.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:39.847 23:21:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:39.847 [2024-07-24 23:21:02.313803] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:39.847 [2024-07-24 23:21:02.313917] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80634 ] 00:20:40.106 [2024-07-24 23:21:02.449085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.106 [2024-07-24 23:21:02.588147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.046 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.046 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:41.046 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:41.046 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:41.046 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:41.306 [2024-07-24 23:21:03.539633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:41.306 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.306 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.564 nvme0n1 00:20:41.564 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:41.564 23:21:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:41.822 Running I/O for 2 seconds... 
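For readers reconstructing the flow from the xtrace above: each run_bperf invocation boils down to the sequence below. This is a condensed sketch assembled only from commands visible in this log; the SPDK shorthand variable and the socket-wait loop are stand-ins for the script's waitforlisten/bperf_rpc helpers, not part of digest.sh itself.

    # Start bdevperf on its private RPC socket, bring the framework up, attach the
    # NVMe/TCP controller with data digest enabled, then kick off the I/O run.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests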
00:20:43.725 00:20:43.725 Latency(us) 00:20:43.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.725 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:43.725 nvme0n1 : 2.01 17098.70 66.79 0.00 0.00 7479.62 5332.25 15847.80 00:20:43.725 =================================================================================================================== 00:20:43.725 Total : 17098.70 66.79 0.00 0.00 7479.62 5332.25 15847.80 00:20:43.725 0 00:20:43.725 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:43.725 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:43.725 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:43.725 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:43.725 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:43.725 | select(.opcode=="crc32c") 00:20:43.725 | "\(.module_name) \(.executed)"' 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80634 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80634 ']' 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80634 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80634 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80634' 00:20:43.984 killing process with pid 80634 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80634 00:20:43.984 Received shutdown signal, test time was about 2.000000 seconds 00:20:43.984 00:20:43.984 Latency(us) 00:20:43.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.984 =================================================================================================================== 00:20:43.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.984 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80634 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80694 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80694 /var/tmp/bperf.sock 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80694 ']' 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:44.552 23:21:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:44.552 [2024-07-24 23:21:06.799107] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:44.552 [2024-07-24 23:21:06.799248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80694 ] 00:20:44.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:44.552 Zero copy mechanism will not be used. 
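As a quick sanity check on the randwrite/4096 result above: the reported 66.79 MiB/s is simply 17098.70 IOPS times the 4096-byte I/O size, and queue depth divided by IOPS gives roughly the reported average latency (about 7486 us versus the table's 7479.62 us). A back-of-envelope recomputation, not part of the test scripts:

    awk 'BEGIN {
        printf "throughput : %.2f MiB/s\n", 17098.70 * 4096 / (1024 * 1024)   # -> 66.79
        printf "avg latency: %.0f us\n",    128 / 17098.70 * 1e6              # -> ~7486
    }'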
00:20:44.552 [2024-07-24 23:21:06.931533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.810 [2024-07-24 23:21:07.083584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.377 23:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:45.377 23:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:20:45.377 23:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:45.377 23:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:45.377 23:21:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:45.634 [2024-07-24 23:21:08.065800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:45.892 23:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:45.892 23:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:46.150 nvme0n1 00:20:46.150 23:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:46.150 23:21:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:46.150 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.150 Zero copy mechanism will not be used. 00:20:46.150 Running I/O for 2 seconds... 
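Each completed run is then validated the same way (cf. the host/digest.sh@93-96 lines earlier in this trace): accelerator statistics are pulled over the bperf socket and the crc32c opcode must have been executed by the expected module, which is "software" here because these runs pass scan_dsa=false. A condensed sketch of that check, folding the script's get_accel_stats/bperf_rpc helpers into one pipeline:

    # Query accel framework stats from bdevperf and keep only the crc32c entry.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # Pass if at least one crc32c operation ran and it was handled in software.
    (( acc_executed > 0 )) && [[ "$acc_module" == software ]] \
        && echo "crc32c handled by $acc_module ($acc_executed ops)"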
00:20:48.681 00:20:48.681 Latency(us) 00:20:48.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.681 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:48.681 nvme0n1 : 2.00 6525.74 815.72 0.00 0.00 2446.24 1794.79 12749.73 00:20:48.681 =================================================================================================================== 00:20:48.681 Total : 6525.74 815.72 0.00 0.00 2446.24 1794.79 12749.73 00:20:48.681 0 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:48.681 | select(.opcode=="crc32c") 00:20:48.681 | "\(.module_name) \(.executed)"' 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80694 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80694 ']' 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80694 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80694 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:48.681 killing process with pid 80694 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80694' 00:20:48.681 Received shutdown signal, test time was about 2.000000 seconds 00:20:48.681 00:20:48.681 Latency(us) 00:20:48.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.681 =================================================================================================================== 00:20:48.681 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80694 00:20:48.681 23:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80694 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80480 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80480 ']' 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80480 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80480 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80480' 00:20:48.941 killing process with pid 80480 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80480 00:20:48.941 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80480 00:20:49.199 00:20:49.199 real 0m19.580s 00:20:49.199 user 0m37.583s 00:20:49.199 sys 0m5.115s 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:49.199 ************************************ 00:20:49.199 END TEST nvmf_digest_clean 00:20:49.199 ************************************ 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:49.199 ************************************ 00:20:49.199 START TEST nvmf_digest_error 00:20:49.199 ************************************ 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80783 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80783 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80783 ']' 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.199 23:21:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:49.458 [2024-07-24 23:21:11.690620] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:49.458 [2024-07-24 23:21:11.690727] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.458 [2024-07-24 23:21:11.825666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.716 [2024-07-24 23:21:11.965447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.716 [2024-07-24 23:21:11.965520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.716 [2024-07-24 23:21:11.965531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.716 [2024-07-24 23:21:11.965538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.716 [2024-07-24 23:21:11.965544] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.716 [2024-07-24 23:21:11.965571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.283 [2024-07-24 23:21:12.678078] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:50.283 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.283 23:21:12 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.283 [2024-07-24 23:21:12.762555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:50.542 null0 00:20:50.542 [2024-07-24 23:21:12.827748] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.542 [2024-07-24 23:21:12.851950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80815 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80815 /var/tmp/bperf.sock 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80815 ']' 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.542 23:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:50.542 [2024-07-24 23:21:12.920857] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:20:50.542 [2024-07-24 23:21:12.920980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80815 ] 00:20:50.801 [2024-07-24 23:21:13.062176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.801 [2024-07-24 23:21:13.231861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.058 [2024-07-24 23:21:13.311112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:51.624 23:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.624 23:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:51.624 23:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.624 23:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:51.882 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:52.141 nvme0n1 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:52.141 23:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:52.141 Running I/O for 2 seconds... 
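The flood of data digest errors that follows is intentional: before this run the trace shows crc32c on the target being reassigned to the error accel module (rpc_accel_assign_opc notice above) and an error-injection rule of type corrupt being installed, while bdevperf attaches with --ddgst so data digests are in use on the connection. A condensed sketch of the target-side setup, using the /var/tmp/spdk.sock address from the trace; the rpc shell function is just shorthand for this sketch in place of the rpc_cmd helper, and the -t/-i arguments are copied verbatim rather than interpreted further here.

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc accel_assign_opc -o crc32c -m error                    # route crc32c to the error module (digest.sh@104)
    rpc accel_error_inject_error -o crc32c -t disable          # clear any previous injection rule (digest.sh@63)
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # arguments exactly as traced (digest.sh@67)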
00:20:52.141 [2024-07-24 23:21:14.590531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.141 [2024-07-24 23:21:14.590593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.141 [2024-07-24 23:21:14.590608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.141 [2024-07-24 23:21:14.605888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.141 [2024-07-24 23:21:14.605936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.141 [2024-07-24 23:21:14.605949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.141 [2024-07-24 23:21:14.621054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.141 [2024-07-24 23:21:14.621103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.141 [2024-07-24 23:21:14.621117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.638531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.638579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.638592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.655822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.655906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.655944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.672747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.672797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.672809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.688815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.688862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.688875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.704781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.704830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.704843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.721200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.721246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.721258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.736607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.736653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.736665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.751742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.751788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.751815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.766331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.766377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.766388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.781354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.781398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.781409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.796609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.796653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.796664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.811064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.811110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.811121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.825449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.825494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.825505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.842373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.842420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.842432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.859664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.859711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.859723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.400 [2024-07-24 23:21:14.876670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.400 [2024-07-24 23:21:14.876740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.400 [2024-07-24 23:21:14.876754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.893709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.893765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.893779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.909921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.909986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.910000] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.926256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.926317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.942187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.942249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.942263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.958958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.959049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.959064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.974890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.974970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.974985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:14.990870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:14.990952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:14.990966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.006947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.007011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:15.007024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.023179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.023266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:52.660 [2024-07-24 23:21:15.023280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.039096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.039172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:15.039186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.054495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.054541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:15.054567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.070118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.070172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:15.070184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.085840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.660 [2024-07-24 23:21:15.085885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.660 [2024-07-24 23:21:15.085896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.660 [2024-07-24 23:21:15.102201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.661 [2024-07-24 23:21:15.102245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.661 [2024-07-24 23:21:15.102257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.661 [2024-07-24 23:21:15.118444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.661 [2024-07-24 23:21:15.118489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.661 [2024-07-24 23:21:15.118500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.661 [2024-07-24 23:21:15.133998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.661 [2024-07-24 23:21:15.134087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:6637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.661 [2024-07-24 23:21:15.134100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.150503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.150589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.150603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.166055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.166140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.166164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.181381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.181460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.181474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.196803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.196895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.196910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.213534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.213623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.213637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.229033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.229118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.229132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.919 [2024-07-24 23:21:15.244554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.919 [2024-07-24 23:21:15.244620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.919 [2024-07-24 23:21:15.244633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.259949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.260016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.260029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.275151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.275207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.275220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.290265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.290336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.290349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.307636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.307728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.307742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.325562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.325616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.325629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.343150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.343223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.343239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.359023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 
00:20:52.920 [2024-07-24 23:21:15.359072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.359085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.374264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.374311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:52.920 [2024-07-24 23:21:15.389996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:52.920 [2024-07-24 23:21:15.390059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.920 [2024-07-24 23:21:15.390071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.407476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.407511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.407524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.424489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.424544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.424557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.440738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.440786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.440798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.456294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.456358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.456385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.473312] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.473361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.473373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.488818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.488864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.488876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.504645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.504692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.504703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.520031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.520062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.520074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.534754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.534799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.534826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.550552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.550599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.550611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.567645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.567691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.179 [2024-07-24 23:21:15.567703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:53.179 [2024-07-24 23:21:15.583692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.179 [2024-07-24 23:21:15.583738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.180 [2024-07-24 23:21:15.583749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.180 [2024-07-24 23:21:15.605482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.180 [2024-07-24 23:21:15.605531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.180 [2024-07-24 23:21:15.605543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.180 [2024-07-24 23:21:15.621679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.180 [2024-07-24 23:21:15.621726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.180 [2024-07-24 23:21:15.621738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.180 [2024-07-24 23:21:15.637753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.180 [2024-07-24 23:21:15.637822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.180 [2024-07-24 23:21:15.637836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.180 [2024-07-24 23:21:15.654247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.180 [2024-07-24 23:21:15.654311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.180 [2024-07-24 23:21:15.654324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.671105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.671193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.671208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.687105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.687221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.687236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.702683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.702736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.702749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.719775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.719877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.719893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.736038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.736104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.736120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.752012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.752076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.752091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.768074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.768126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.768152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.784045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.784112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.784138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.800992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.801070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 
23:21:15.801085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.816913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.816960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.816972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.833065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.833113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.833125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.848975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.849023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.849051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.864980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.865026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.451 [2024-07-24 23:21:15.865054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.451 [2024-07-24 23:21:15.881731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.451 [2024-07-24 23:21:15.881779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.452 [2024-07-24 23:21:15.881791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.452 [2024-07-24 23:21:15.897759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.452 [2024-07-24 23:21:15.897851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.452 [2024-07-24 23:21:15.897865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.452 [2024-07-24 23:21:15.914242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.452 [2024-07-24 23:21:15.914329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7714 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:53.452 [2024-07-24 23:21:15.914344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:15.931947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:15.932030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:15.932048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:15.950042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:15.950140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:15.950172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:15.966190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:15.966280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:15.966296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:15.982885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:15.982985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:15.983001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:15.999558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:15.999649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:15.999664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.015884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.015976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.015991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.031417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.031472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:8816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.031485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.048062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.048101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.048114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.065411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.065503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.065518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.082080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.082124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.082171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.098068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.098139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.098173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.114421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.114486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.114500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.130431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.130480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.130507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.146512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.146560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.146572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.163409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.163456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.163488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.181602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.181638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.181650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.198776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.198832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.198845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.738 [2024-07-24 23:21:16.214727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.738 [2024-07-24 23:21:16.214778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.738 [2024-07-24 23:21:16.214805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.997 [2024-07-24 23:21:16.231861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.997 [2024-07-24 23:21:16.231910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.997 [2024-07-24 23:21:16.231948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.997 [2024-07-24 23:21:16.247362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.997 [2024-07-24 23:21:16.247409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.997 [2024-07-24 23:21:16.247420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.262590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 
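Each entry in the block above follows the same two-part pattern: nvme_tcp.c:1459 reports a data digest error on the qpair (0x2153730), and the matching READ completion is printed with status (00/22), which SPDK decodes as COMMAND TRANSIENT TRANSPORT ERROR; these are the completions that the command_transient_transport_error counter checked further down is built from. As a hedged aside, a saved copy of this output can be tallied with two greps (the file name bdevperf.log is only an illustration, not something this run produces):

    # Count the injected digest errors and the matching transient-transport
    # completions in a saved copy of this log ("bdevperf.log" is hypothetical).
    grep -c 'data digest error on tqpair=(0x2153730)' bdevperf.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log

In the excerpt above the two counts track each other one-for-one, since every digest error is surfaced as a transient transport error on the corresponding READ.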
00:20:53.998 [2024-07-24 23:21:16.262638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.262650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.279826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.279873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.279885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.294721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.294768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.294779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.309461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.309507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.309518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.326165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.326224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.326237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.344398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.344445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.344492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.361218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.361261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.361275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.376918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.376964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.376976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.392040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.392087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.392099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.407841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.407895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.407907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.422899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.422951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.422963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.438439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.438509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.438522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.453718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.453774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.453786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:53.998 [2024-07-24 23:21:16.469263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:53.998 [2024-07-24 23:21:16.469322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.998 [2024-07-24 23:21:16.469336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.485959] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.486043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.486057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.501448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.501527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.501541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.516869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.516938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.516951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.533342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.533417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.533431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.548850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.548913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.548927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 [2024-07-24 23:21:16.564219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2153730) 00:20:54.258 [2024-07-24 23:21:16.564267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.258 [2024-07-24 23:21:16.564298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:54.258 00:20:54.258 Latency(us) 00:20:54.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.258 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:54.258 nvme0n1 : 2.01 15704.64 61.35 0.00 0.00 8144.75 7030.23 30265.72 00:20:54.258 =================================================================================================================== 00:20:54.258 Total : 15704.64 61.35 0.00 0.00 8144.75 7030.23 30265.72 00:20:54.258 0 
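The summary above closes the first pass (randread, queue depth 128, 4096-byte IOs over roughly 2 seconds): about 15.7k IOPS at 61.35 MiB/s with an average latency near 8.1 ms despite the injected digest errors. The MiB/s column is simply IOPS multiplied by the IO size, so the figures can be cross-checked with a one-liner (awk is used here purely for illustration):

    # Cross-check of the run summary: 15704.64 IOPS of 4096-byte reads
    # works out to the reported 61.35 MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 15704.64 * 4096 / (1024 * 1024) }'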
00:20:54.258 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:54.258 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:54.258 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:54.258 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:54.258 | .driver_specific 00:20:54.258 | .nvme_error 00:20:54.258 | .status_code 00:20:54.258 | .command_transient_transport_error' 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80815 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80815 ']' 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80815 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80815 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:54.517 killing process with pid 80815 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80815' 00:20:54.517 Received shutdown signal, test time was about 2.000000 seconds 00:20:54.517 00:20:54.517 Latency(us) 00:20:54.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.517 =================================================================================================================== 00:20:54.517 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80815 00:20:54.517 23:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80815 00:20:54.776 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:54.776 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:54.776 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:54.776 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:54.776 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80870 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80870 /var/tmp/bperf.sock 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80870 ']' 
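The error check above is driven entirely over the bperf RPC socket: digest.sh fetches bdev_get_iostat -b nvme0n1 through rpc.py, pipes the JSON through the jq filter shown, asserts that the resulting transient-error count (123 in this pass) is greater than zero, then kills bdevperf pid 80815 and relaunches it for the next pass (randread, 131072-byte IOs, queue depth 16). A standalone sketch of that extraction, run against a hand-written payload with the same shape (the JSON below is an assumed, trimmed-down example rather than real RPC output, and jq is assumed to be installed):

    # Hypothetical, trimmed-down bdev_get_iostat payload; only the fields the
    # jq filter actually touches are included.
    iostat_json='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":123}}}}]}'
    # Same filter digest.sh uses; prints 123 here, and the test only requires
    # that the count be greater than zero.
    errcount=$(printf '%s' "$iostat_json" | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"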
00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.777 23:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:55.035 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:55.035 Zero copy mechanism will not be used. 00:20:55.035 [2024-07-24 23:21:17.270743] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:55.035 [2024-07-24 23:21:17.270832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80870 ] 00:20:55.035 [2024-07-24 23:21:17.405735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.294 [2024-07-24 23:21:17.544097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.294 [2024-07-24 23:21:17.620054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:55.862 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.862 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:20:55.862 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:55.862 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:56.121 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:56.380 nvme0n1 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:56.380 23:21:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:56.640 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:56.640 Zero copy mechanism will not be used. 00:20:56.640 Running I/O for 2 seconds... 00:20:56.640 [2024-07-24 23:21:18.968471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.968553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.968568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.972682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.972730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.972742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.976934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.976981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.976994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.981094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.981152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.981165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.985697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.985745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.985757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.989899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.989948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 
23:21:18.989960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.994551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.994586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.994600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:18.999148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:18.999208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:18.999221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.003598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.003646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:19.003657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.007718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.007766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:19.007777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.012037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.012072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:19.012085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.016377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.016408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:19.016420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.020601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.020648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.640 [2024-07-24 23:21:19.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.640 [2024-07-24 23:21:19.024947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.640 [2024-07-24 23:21:19.024995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.025038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.029499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.029548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.029560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.034480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.034550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.034563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.038839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.038887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.038898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.043047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.043095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.043107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.047372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.047419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.047430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.051337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.051383] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.051395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.055306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.055352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.059266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.059328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.059339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.063258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.063304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.063315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.067214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.067260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.067271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.071096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.071154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.071167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.075296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.075356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.079802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 
23:21:19.079849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.079861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.084200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.084234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.084260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.088134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.088195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.088208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.092092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.092141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.092163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.095997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.096046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.096058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.100122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.100179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.100192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.104134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.104192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.104205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.108176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.108223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.108235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.112114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.112159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.112171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.116032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.116065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.116078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.641 [2024-07-24 23:21:19.120219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.641 [2024-07-24 23:21:19.120253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.641 [2024-07-24 23:21:19.120280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.124647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.124709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.124721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.129097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.129155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.129183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.133358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.133405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.133417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.137559] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.137633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.137646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.141821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.141869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.141881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.145994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.146042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.146053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.150088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.150136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.150159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.154381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.154428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.154439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.158571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.158623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.158635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.162892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.162941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.162952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.167304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.167353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.167365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.171888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.171945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.171975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.176061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.176096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.176109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.180201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.180235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.180248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.184275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.184336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.184348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.188422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.188499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.188511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.192599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.192649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.192660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.903 [2024-07-24 23:21:19.196835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.903 [2024-07-24 23:21:19.196884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.903 [2024-07-24 23:21:19.196895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.201165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.201224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.201236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.205489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.205536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.205547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.209638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.209703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.209715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.213759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.213807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.213818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.217890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.217938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.217950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.221993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.222040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.222052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.226125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.226184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.226196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.230471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.230518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.230530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.234701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.234748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.234760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.238866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.238915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.238927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.243076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.243126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.243138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.247280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.247329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.247341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.251444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.251495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:56.904 [2024-07-24 23:21:19.251507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.256062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.256098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.256111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.260839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.260876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.260889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.265136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.265196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.265209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.269513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.269571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.269584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.273820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.273868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.273879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.278053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.278102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.278114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.282121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.282178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.282190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.286284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.286331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.286342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.290413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.290461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.290473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.294534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.294581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.904 [2024-07-24 23:21:19.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.904 [2024-07-24 23:21:19.298543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.904 [2024-07-24 23:21:19.298592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.298603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.302594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.302642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.302655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.306781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.306830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.306842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.310901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.310949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.310960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.314978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.315026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.315037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.319036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.319084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.319096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.323185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.323232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.323244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.327242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.327289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.327300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.331286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.331333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.331345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.335275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.335321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.335333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.339379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:56.905 [2024-07-24 23:21:19.339427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.339439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.343389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.343436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.343447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.347560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.347608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.347620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.351680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.351727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.351739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.355860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.355908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.355946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.360120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.360167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.360180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.364208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.364257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.364285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.368434] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.368481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.368492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.372656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.372705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.372716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.376908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.376956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.376967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.905 [2024-07-24 23:21:19.381468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:56.905 [2024-07-24 23:21:19.381502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.905 [2024-07-24 23:21:19.381515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.386318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.386353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.386375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.390936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.390986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.390999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.395403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.395438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.395450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.400070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.400105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.400118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.404470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.404532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.404544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.409370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.409407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.409419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.413952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.414010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.414038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.418575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.418624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.418636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.423021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.423073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.423086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.427688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.427723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.427735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.432469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.432535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.432547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.436754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.436802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.436814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.441077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.441126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.441138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.445536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.445571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.445584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.450109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.450186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.450200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.454699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.454760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.458921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.458973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.458985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.463136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.463195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.463207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.467338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.467387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.467398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.471370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.471420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.471432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.475673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.475722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.475734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.479766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.479815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.479826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.483954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.167 [2024-07-24 23:21:19.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.167 [2024-07-24 23:21:19.484003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.167 [2024-07-24 23:21:19.488167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.488202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.168 [2024-07-24 23:21:19.488215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.492319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.492369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.492381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.496419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.496482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.496508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.500632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.500681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.500693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.504867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.504923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.504934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.509026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.509074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.509086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.513441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.513510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.513522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.518161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.518219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.518232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.522452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.522503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.526678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.526725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.526737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.530739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.530787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.530799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.534775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.534823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.534835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.538873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.538925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.538937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.543034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.543083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.543094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.547050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.547099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.547111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.551136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.551194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.551206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.555337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.555389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.555401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.559413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.559461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.559473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.563400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.563447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.563459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.567362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.567409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.567424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.571271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.571319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.571331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.575302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:57.168 [2024-07-24 23:21:19.575351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.575362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.579362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.579412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.579425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.583396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.583445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.583457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.587356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.587403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.587414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.591547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.168 [2024-07-24 23:21:19.591596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.168 [2024-07-24 23:21:19.591608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.168 [2024-07-24 23:21:19.595633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.595682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.595694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.599796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.599844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.599855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.604015] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.604063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.604076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.608071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.608124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.608136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.612305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.612370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.612383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.616401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.616464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.616477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.620644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.620703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.624824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.624872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.624883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.629009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.629056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.629068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.633179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.633239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.633251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.637352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.637398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.637410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.641500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.641547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.641559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.169 [2024-07-24 23:21:19.645971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.169 [2024-07-24 23:21:19.646020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.169 [2024-07-24 23:21:19.646049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.650682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.650731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.650758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.654941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.655005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.655018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.659432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.659486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.659499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.663608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.663656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.663670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.667828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.667876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.667888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.672165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.672199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.672212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.676907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.676956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.676968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.681395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.681442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.681468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.686107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.686166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.686178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.690741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.690789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.690800] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.694907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.694955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.694966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.698973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.699021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.699032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.703051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.703098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.703110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.707520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.707583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.707596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.712053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.712089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.712102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.716696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.716745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.721196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.721256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.721269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.725723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.430 [2024-07-24 23:21:19.725772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.430 [2024-07-24 23:21:19.725784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.430 [2024-07-24 23:21:19.730306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.730338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.730352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.734762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.734810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.734821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.738981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.739030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.739042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.743233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.743280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.743292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.747379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.747427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.747439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.751537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.751584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.751596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.755874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.755930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.755959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.759983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.760018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.760030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.764193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.764242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.764270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.768215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.768264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.768276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.772625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.772687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.777174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.777233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.777245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.781578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.781626] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.781638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.785797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.785844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.785856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.789899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.789947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.794081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.794128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.794140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.798200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.798247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.802243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.802290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.802302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.806436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.806485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.806497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.810535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.810583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.810595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.814624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.814672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.814683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.818716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.818776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.822817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.822865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.822876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.826939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.826987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.826999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.831313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.831359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.831372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.835370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.431 [2024-07-24 23:21:19.835418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.431 [2024-07-24 23:21:19.835429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.431 [2024-07-24 23:21:19.839444] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.839491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.839503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.843512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.843560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.843571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.847735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.847766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.847779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.852271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.852305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.852319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.856640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.856689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.856700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.861020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.861070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.861082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.865424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.865487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.865501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.870058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.870095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.870107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.874560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.874610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.874622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.879058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.879093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.879106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.883499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.883547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.883575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.887804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.887854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.887866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.892343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.892427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.892440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.896908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.896958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.896970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.901588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.901637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.901665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.905869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.905917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.905929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.432 [2024-07-24 23:21:19.910333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.432 [2024-07-24 23:21:19.910366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.432 [2024-07-24 23:21:19.910378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.914736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.914785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.914797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.919269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.919319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.919331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.923802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.923851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.923863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.928049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.928084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.928097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.932182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.932216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.932229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.936310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.936357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.936369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.940516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.940563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.940575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.945117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.945175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.945187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.949624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.949673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.949685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.953773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.953822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.953834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.958058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.958124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.958151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.962363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.693 [2024-07-24 23:21:19.962411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.693 [2024-07-24 23:21:19.962423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.693 [2024-07-24 23:21:19.966606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.966654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.966666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.971308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.971343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.971355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.975804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.975852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.975864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.980164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.980198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.980211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.984363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.984423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.984435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.988544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.988590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.988602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.992757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.992805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.992817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:19.997005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:19.997053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:19.997064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.001248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.001295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.001307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.005400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.005447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.005459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.009619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.009668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.009679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.013890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.013938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.013950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.018056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.018116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.018127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.022327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.022374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.022386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.026575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.026633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.026645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.031101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.031161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.031174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.035540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.035603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.035615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.040045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.040080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.040092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.044113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.044159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.044173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.048238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:57.694 [2024-07-24 23:21:20.048301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.048313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.052430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.052477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.052489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.056680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.056728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.056739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.060925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.060973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.060984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.065121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.065178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.065190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.069367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.069414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.069425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.073534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.073608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.694 [2024-07-24 23:21:20.073621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.694 [2024-07-24 23:21:20.077882] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.694 [2024-07-24 23:21:20.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.077945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.082131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.082191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.082220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.086336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.086384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.090543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.090592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.090603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.094855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.094905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.094917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.099071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.099121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.099133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.103250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.103298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.103309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.107542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.107605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.107616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.112069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.112105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.112117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.116557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.116605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.116617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.120836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.120882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.120894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.125143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.125212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.129391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.129439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.129450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.133519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.133567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.133579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.137784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.137831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.137843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.141958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.142006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.142018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.146315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.146363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.146374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.150443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.150480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.150507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.154665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.154713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.154726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.159364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.159398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.159411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.163860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.163911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.163934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.168508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.168556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.168568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.695 [2024-07-24 23:21:20.173160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.695 [2024-07-24 23:21:20.173204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.695 [2024-07-24 23:21:20.173218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.956 [2024-07-24 23:21:20.177821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.956 [2024-07-24 23:21:20.177871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.956 [2024-07-24 23:21:20.177884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.956 [2024-07-24 23:21:20.182518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.956 [2024-07-24 23:21:20.182567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.956 [2024-07-24 23:21:20.182594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.956 [2024-07-24 23:21:20.186963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.956 [2024-07-24 23:21:20.187011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.956 [2024-07-24 23:21:20.187040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.956 [2024-07-24 23:21:20.191560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.956 [2024-07-24 23:21:20.191607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.191619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.195870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.195944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.957 [2024-07-24 23:21:20.195974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.200186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.200219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.200232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.204397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.204444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.204470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.208594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.208642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.208653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.212917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.212965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.212976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.217481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.217529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.217540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.221699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.221752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.221764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.226003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.226052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.226064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.230315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.230362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.230374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.234511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.234561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.234573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.238725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.238773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.238785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.242892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.242941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.242953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.247651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.247716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.247729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.252141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.252188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.252201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.256814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.256848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.256861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.261424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.261470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.265929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.265976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.265987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.270197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.270244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.270255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.274396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.274443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.274454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.279108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.279166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.279194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.283642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.283692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.283704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.287724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:57.957 [2024-07-24 23:21:20.287770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.287781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.292499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.292557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.292568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.297043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.297106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.297117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.301159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.301215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.301227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.957 [2024-07-24 23:21:20.305183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.957 [2024-07-24 23:21:20.305238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.957 [2024-07-24 23:21:20.305249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.309254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.309300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.309310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.313298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.313343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.313354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.317283] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.317328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.317340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.321371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.321418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.321429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.325481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.325529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.325539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.329639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.329686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.329697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.333772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.333819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.333830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.337879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.337926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.337937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.341976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.342023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.342034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.346077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.346124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.346135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.350030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.350077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.350088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.354112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.354169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.354181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.358147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.358192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.358203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.362151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.362197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.362208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.366234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.366281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.366293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.370328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.370375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.374439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.374486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.374498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.378558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.378605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.378617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.382692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.382740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.382751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.386838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.386884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.386896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.391240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.391288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.391300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.395310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.395370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.399489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.399537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.399564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.403812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.403859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.403887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.408206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.408240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.412811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.412861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.958 [2024-07-24 23:21:20.412888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.958 [2024-07-24 23:21:20.417377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.958 [2024-07-24 23:21:20.417411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.959 [2024-07-24 23:21:20.417436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:57.959 [2024-07-24 23:21:20.421866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.959 [2024-07-24 23:21:20.421915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.959 [2024-07-24 23:21:20.421928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:57.959 [2024-07-24 23:21:20.426325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.959 [2024-07-24 23:21:20.426360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.959 [2024-07-24 23:21:20.426373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:57.959 [2024-07-24 23:21:20.431125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.959 [2024-07-24 23:21:20.431172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:57.959 [2024-07-24 23:21:20.431186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:57.959 [2024-07-24 23:21:20.435575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:57.959 [2024-07-24 23:21:20.435624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:57.959 [2024-07-24 23:21:20.435651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.440475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.440536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.440563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.445289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.445337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.445350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.449949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.449997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.450026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.454795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.454842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.454854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.459045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.459092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.459104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.463205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.463252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.463264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.467311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.467359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.467370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.471324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.471371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.471383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.475282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.475328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.475339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.479227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.479273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.479284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.483541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.483589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.483600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.487652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.487700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.487711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.491899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.491957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.491970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.496202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.496257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.496283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.500311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.500358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.500370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.504506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.504553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.504565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.508728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.508775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.508787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.513396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.513444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.222 [2024-07-24 23:21:20.513455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.222 [2024-07-24 23:21:20.517932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.222 [2024-07-24 23:21:20.517980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.517992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.522237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:58.223 [2024-07-24 23:21:20.522285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.522296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.526307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.526367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.530479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.530526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.530537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.534649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.534709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.538889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.538937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.538949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.543101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.543158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.543171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.547180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.547227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.547239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.551720] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.551769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.551780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.556418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.556482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.556495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.560560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.560607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.560619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.564716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.564776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.564788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.568973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.569023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.569034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.573178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.573236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.573248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.577272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.577319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.577330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.581347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.581395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.581406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.585433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.585482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.585493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.589558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.589605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.589616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.593833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.593884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.593896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.598351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.598400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.598426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.602615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.602663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.602674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.606825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.606884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.223 [2024-07-24 23:21:20.611229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.223 [2024-07-24 23:21:20.611276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.223 [2024-07-24 23:21:20.611288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.615382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.615428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.615440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.619528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.619576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.619588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.623595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.623643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.623654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.627855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.627904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.627923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.632236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.632272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.632284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.636484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.636530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.636541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.640628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.640676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.640688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.644988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.645037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.645049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.649560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.649595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.649608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.654246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.654294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.658486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.658533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.658545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.662794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.662842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.662853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.666950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.666999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:58.224 [2024-07-24 23:21:20.667010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.671065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.671113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.671125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.675103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.675164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.675177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.679301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.679361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.679372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.683323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.683370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.683381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.687412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.687460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.687472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.691501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.691549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.691561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.695732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.695780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.695792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.224 [2024-07-24 23:21:20.700394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.224 [2024-07-24 23:21:20.700444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.224 [2024-07-24 23:21:20.700456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.504 [2024-07-24 23:21:20.704954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.504 [2024-07-24 23:21:20.704993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.504 [2024-07-24 23:21:20.705007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.504 [2024-07-24 23:21:20.709371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.504 [2024-07-24 23:21:20.709407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.504 [2024-07-24 23:21:20.709421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.504 [2024-07-24 23:21:20.713704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.504 [2024-07-24 23:21:20.713742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.504 [2024-07-24 23:21:20.713755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.504 [2024-07-24 23:21:20.718108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.504 [2024-07-24 23:21:20.718179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.504 [2024-07-24 23:21:20.718193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.504 [2024-07-24 23:21:20.722608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.504 [2024-07-24 23:21:20.722661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.722673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.726987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.727037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.727049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.731412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.731466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.731482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.735655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.735704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.735716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.739827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.739879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.739891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.744071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.744114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.744140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.748177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.748226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.748243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.752440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.752472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.752500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.756593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 
00:20:58.505 [2024-07-24 23:21:20.756642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.756654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.760675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.760723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.760735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.764922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.764970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.764982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.769182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.769226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.769238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.773639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.773689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.773702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.778111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.778174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.778187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.782397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.782431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.786752] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.786801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.786812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.791024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.791072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.791089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.795291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.795355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.795367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.799406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.799454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.799466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.803662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.803697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.803709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.808046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.808082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.808095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.812270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.812305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.812318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.816624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.816674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.816687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.821134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.821206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.821219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.825315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.825363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.825375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.829533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.829595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.829607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.505 [2024-07-24 23:21:20.833814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.505 [2024-07-24 23:21:20.833871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.505 [2024-07-24 23:21:20.833883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.838467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.838515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.838527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.842668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.842717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.842728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.846875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.846923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.846935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.851127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.851197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.851209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.855263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.855322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.859372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.859420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.859431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.863460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.863508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.863519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.867595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.867642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.867654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.872109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.872154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.872167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.876717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.876769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.876782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.881247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.881300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.881312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.885620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.885677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.885688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.889877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.889925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.889937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.894026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.894073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.894085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.898325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.898372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.898384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.902498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.902547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:58.506 [2024-07-24 23:21:20.902573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.907000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.907081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.907093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.911595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.911643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.911655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.915967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.916001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.916013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.920654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.920702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.920714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.925159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.925219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.925231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.929686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.929735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.929746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.934233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.934282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.934295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.938999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.939050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.939062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.943481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.943517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.943530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:58.506 [2024-07-24 23:21:20.948250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.506 [2024-07-24 23:21:20.948284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.506 [2024-07-24 23:21:20.948296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:58.507 [2024-07-24 23:21:20.952726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.507 [2024-07-24 23:21:20.952776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.507 [2024-07-24 23:21:20.952790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.507 [2024-07-24 23:21:20.957203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1accfb0) 00:20:58.507 [2024-07-24 23:21:20.957237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.507 [2024-07-24 23:21:20.957249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:58.507 00:20:58.507 Latency(us) 00:20:58.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.507 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:58.507 nvme0n1 : 2.00 7201.41 900.18 0.00 0.00 2218.14 1742.66 10783.65 00:20:58.507 =================================================================================================================== 00:20:58.507 Total : 7201.41 900.18 0.00 0.00 2218.14 1742.66 10783.65 00:20:58.507 0 00:20:58.507 23:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:58.507 23:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat 
-b nvme0n1 00:20:58.507 23:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:58.507 23:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:58.507 | .driver_specific 00:20:58.507 | .nvme_error 00:20:58.507 | .status_code 00:20:58.507 | .command_transient_transport_error' 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 465 > 0 )) 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80870 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80870 ']' 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80870 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80870 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.074 killing process with pid 80870 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80870' 00:20:59.074 Received shutdown signal, test time was about 2.000000 seconds 00:20:59.074 00:20:59.074 Latency(us) 00:20:59.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.074 =================================================================================================================== 00:20:59.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80870 00:20:59.074 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80870 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80936 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80936 /var/tmp/bperf.sock 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80936 ']' 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:20:59.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:59.333 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:59.334 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.334 23:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:59.334 [2024-07-24 23:21:21.700208] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:20:59.334 [2024-07-24 23:21:21.700312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80936 ] 00:20:59.593 [2024-07-24 23:21:21.839873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.593 [2024-07-24 23:21:21.951090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.593 [2024-07-24 23:21:22.026698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.529 23:21:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:00.788 nvme0n1 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:00.788 23:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.050 Running I/O for 2 seconds... 00:21:01.050 [2024-07-24 23:21:23.359590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fef90 00:21:01.050 [2024-07-24 23:21:23.362134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.362197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.375390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190feb58 00:21:01.050 [2024-07-24 23:21:23.378093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.378142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.392320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fe2e8 00:21:01.050 [2024-07-24 23:21:23.394859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.394930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.407900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fda78 00:21:01.050 [2024-07-24 23:21:23.410451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.410484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.423752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fd208 00:21:01.050 [2024-07-24 23:21:23.426308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.426353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.438822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fc998 00:21:01.050 [2024-07-24 23:21:23.441183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 23:21:23.441252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.050 [2024-07-24 23:21:23.454078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fc128 00:21:01.050 [2024-07-24 23:21:23.456418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.050 [2024-07-24 
23:21:23.456462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.051 [2024-07-24 23:21:23.468721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fb8b8 00:21:01.051 [2024-07-24 23:21:23.470966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.051 [2024-07-24 23:21:23.471010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.051 [2024-07-24 23:21:23.484988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fb048 00:21:01.051 [2024-07-24 23:21:23.487391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.051 [2024-07-24 23:21:23.487428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.051 [2024-07-24 23:21:23.501636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fa7d8 00:21:01.051 [2024-07-24 23:21:23.504005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.051 [2024-07-24 23:21:23.504041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.051 [2024-07-24 23:21:23.517577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f9f68 00:21:01.051 [2024-07-24 23:21:23.519847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.051 [2024-07-24 23:21:23.519895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.051 [2024-07-24 23:21:23.533577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f96f8 00:21:01.310 [2024-07-24 23:21:23.535847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.535881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.548883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f8e88 00:21:01.310 [2024-07-24 23:21:23.551243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.551277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.564644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f8618 00:21:01.310 [2024-07-24 23:21:23.566841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.310 [2024-07-24 23:21:23.566877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.579112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f7da8 00:21:01.310 [2024-07-24 23:21:23.581348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.581383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.593900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f7538 00:21:01.310 [2024-07-24 23:21:23.596210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.596250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.608590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f6cc8 00:21:01.310 [2024-07-24 23:21:23.610699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.610733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.623677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f6458 00:21:01.310 [2024-07-24 23:21:23.625874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.625910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.638513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f5be8 00:21:01.310 [2024-07-24 23:21:23.640694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.640731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.654061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f5378 00:21:01.310 [2024-07-24 23:21:23.656353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.656393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.668969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f4b08 00:21:01.310 [2024-07-24 23:21:23.671183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5975 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.671246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.684462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f4298 00:21:01.310 [2024-07-24 23:21:23.686752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.686788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.699488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f3a28 00:21:01.310 [2024-07-24 23:21:23.701542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.701593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.714165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f31b8 00:21:01.310 [2024-07-24 23:21:23.716233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.716284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.728788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f2948 00:21:01.310 [2024-07-24 23:21:23.730758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.310 [2024-07-24 23:21:23.730790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.310 [2024-07-24 23:21:23.743403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f20d8 00:21:01.311 [2024-07-24 23:21:23.745423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.311 [2024-07-24 23:21:23.745466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.311 [2024-07-24 23:21:23.758640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f1868 00:21:01.311 [2024-07-24 23:21:23.760895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.311 [2024-07-24 23:21:23.760943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.311 [2024-07-24 23:21:23.773969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f0ff8 00:21:01.311 [2024-07-24 23:21:23.776008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2576 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.311 [2024-07-24 23:21:23.776043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.311 [2024-07-24 23:21:23.789342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f0788 00:21:01.311 [2024-07-24 23:21:23.791466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.311 [2024-07-24 23:21:23.791498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.569 [2024-07-24 23:21:23.805035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eff18 00:21:01.570 [2024-07-24 23:21:23.806984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.807026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.820369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ef6a8 00:21:01.570 [2024-07-24 23:21:23.822459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.822501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.835972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eee38 00:21:01.570 [2024-07-24 23:21:23.837957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.837997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.850833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ee5c8 00:21:01.570 [2024-07-24 23:21:23.852828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.852869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.865744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190edd58 00:21:01.570 [2024-07-24 23:21:23.867648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.867688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.880655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ed4e8 00:21:01.570 [2024-07-24 23:21:23.882608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:69 nsid:1 lba:5339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.882649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.895517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ecc78 00:21:01.570 [2024-07-24 23:21:23.897364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.897405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.911032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ec408 00:21:01.570 [2024-07-24 23:21:23.913005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.913046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.926020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ebb98 00:21:01.570 [2024-07-24 23:21:23.927825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.927859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.941161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eb328 00:21:01.570 [2024-07-24 23:21:23.943094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.943137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.955885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eaab8 00:21:01.570 [2024-07-24 23:21:23.957771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.957805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.970818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ea248 00:21:01.570 [2024-07-24 23:21:23.972762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.972803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:23.985748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e99d8 00:21:01.570 [2024-07-24 23:21:23.987445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:23.987482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:24.000698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e9168 00:21:01.570 [2024-07-24 23:21:24.002533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:24.002573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:24.015923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e88f8 00:21:01.570 [2024-07-24 23:21:24.017769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:24.017802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:24.031245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e8088 00:21:01.570 [2024-07-24 23:21:24.032973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:24.033024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.570 [2024-07-24 23:21:24.046403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e7818 00:21:01.570 [2024-07-24 23:21:24.048062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.570 [2024-07-24 23:21:24.048098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.062705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e6fa8 00:21:01.830 [2024-07-24 23:21:24.064430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.064463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.080724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e6738 00:21:01.830 [2024-07-24 23:21:24.082485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.082518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.097490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e5ec8 00:21:01.830 [2024-07-24 23:21:24.099530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.099579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.114730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e5658 00:21:01.830 [2024-07-24 23:21:24.116403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.116436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.130894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e4de8 00:21:01.830 [2024-07-24 23:21:24.132598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.132631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.146679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e4578 00:21:01.830 [2024-07-24 23:21:24.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.148337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.163060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e3d08 00:21:01.830 [2024-07-24 23:21:24.164973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.165040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.179392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e3498 00:21:01.830 [2024-07-24 23:21:24.180908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.180942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.194275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e2c28 00:21:01.830 [2024-07-24 23:21:24.195845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.195881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.210185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e23b8 00:21:01.830 [2024-07-24 
23:21:24.212051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.212088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.225781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e1b48 00:21:01.830 [2024-07-24 23:21:24.227177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.227236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.240325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e12d8 00:21:01.830 [2024-07-24 23:21:24.241709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.241742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.254913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e0a68 00:21:01.830 [2024-07-24 23:21:24.256466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.256499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.269348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e01f8 00:21:01.830 [2024-07-24 23:21:24.270701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.270750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.284805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190df988 00:21:01.830 [2024-07-24 23:21:24.286113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.286169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.830 [2024-07-24 23:21:24.299422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190df118 00:21:01.830 [2024-07-24 23:21:24.300797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.830 [2024-07-24 23:21:24.300829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.314551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with 
pdu=0x2000190de8a8 00:21:02.089 [2024-07-24 23:21:24.315887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.315942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.329630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190de038 00:21:02.089 [2024-07-24 23:21:24.330911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.330944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.351585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190de038 00:21:02.089 [2024-07-24 23:21:24.354093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.354159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.366822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190de8a8 00:21:02.089 [2024-07-24 23:21:24.369330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.369367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.382837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190df118 00:21:02.089 [2024-07-24 23:21:24.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.385377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.397924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190df988 00:21:02.089 [2024-07-24 23:21:24.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.400514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.413234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e01f8 00:21:02.089 [2024-07-24 23:21:24.415601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.415636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.429160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x12ce640) with pdu=0x2000190e0a68 00:21:02.089 [2024-07-24 23:21:24.431729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.431769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.444698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e12d8 00:21:02.089 [2024-07-24 23:21:24.447054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.447094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.460898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e1b48 00:21:02.089 [2024-07-24 23:21:24.463352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.463393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.476026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e23b8 00:21:02.089 [2024-07-24 23:21:24.478344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.478383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.491125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e2c28 00:21:02.089 [2024-07-24 23:21:24.493564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.089 [2024-07-24 23:21:24.493604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:02.089 [2024-07-24 23:21:24.508744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e3498 00:21:02.090 [2024-07-24 23:21:24.511495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.090 [2024-07-24 23:21:24.511536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:02.090 [2024-07-24 23:21:24.525767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e3d08 00:21:02.090 [2024-07-24 23:21:24.528167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.090 [2024-07-24 23:21:24.528203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:02.090 [2024-07-24 23:21:24.541902] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e4578 00:21:02.090 [2024-07-24 23:21:24.544267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.090 [2024-07-24 23:21:24.544302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:02.090 [2024-07-24 23:21:24.557412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e4de8 00:21:02.090 [2024-07-24 23:21:24.559717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.090 [2024-07-24 23:21:24.559752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.574310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e5658 00:21:02.349 [2024-07-24 23:21:24.576610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.576645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.590811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e5ec8 00:21:02.349 [2024-07-24 23:21:24.593077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.593111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.606586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e6738 00:21:02.349 [2024-07-24 23:21:24.608871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.608911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.622092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e6fa8 00:21:02.349 [2024-07-24 23:21:24.624381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.624423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.637521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e7818 00:21:02.349 [2024-07-24 23:21:24.639729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.639770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:02.349 
[2024-07-24 23:21:24.653049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e8088 00:21:02.349 [2024-07-24 23:21:24.655179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.655219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.668289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e88f8 00:21:02.349 [2024-07-24 23:21:24.670387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.670429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.683654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e9168 00:21:02.349 [2024-07-24 23:21:24.686143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.686183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.699696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190e99d8 00:21:02.349 [2024-07-24 23:21:24.701894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.701936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.715600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ea248 00:21:02.349 [2024-07-24 23:21:24.717701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.717735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.731114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eaab8 00:21:02.349 [2024-07-24 23:21:24.733211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.733251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.746374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eb328 00:21:02.349 [2024-07-24 23:21:24.748557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.748594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 
dnr:0 00:21:02.349 [2024-07-24 23:21:24.761443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ebb98 00:21:02.349 [2024-07-24 23:21:24.763505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.763559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.777279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ec408 00:21:02.349 [2024-07-24 23:21:24.779234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.779274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.792440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ecc78 00:21:02.349 [2024-07-24 23:21:24.794407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.794445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.807232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ed4e8 00:21:02.349 [2024-07-24 23:21:24.809152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.809216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:02.349 [2024-07-24 23:21:24.821710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190edd58 00:21:02.349 [2024-07-24 23:21:24.823664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.349 [2024-07-24 23:21:24.823695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.837185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ee5c8 00:21:02.608 [2024-07-24 23:21:24.839143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.839198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.852798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eee38 00:21:02.608 [2024-07-24 23:21:24.854713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0040 p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.869529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190ef6a8 00:21:02.608 [2024-07-24 23:21:24.871775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.871812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.886824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190eff18 00:21:02.608 [2024-07-24 23:21:24.888870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.888901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.905187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f0788 00:21:02.608 [2024-07-24 23:21:24.907547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.907581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.922489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f0ff8 00:21:02.608 [2024-07-24 23:21:24.924439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.924472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.938809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f1868 00:21:02.608 [2024-07-24 23:21:24.940869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.608 [2024-07-24 23:21:24.940901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:02.608 [2024-07-24 23:21:24.956314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f20d8 00:21:02.608 [2024-07-24 23:21:24.958156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:24.958201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:24.972580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f2948 00:21:02.609 [2024-07-24 23:21:24.974456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:24.974492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:24.989737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f31b8 00:21:02.609 [2024-07-24 23:21:24.991529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:24.991576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.007266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f3a28 00:21:02.609 [2024-07-24 23:21:25.009325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.009376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.024669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f4298 00:21:02.609 [2024-07-24 23:21:25.026459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.026493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.039873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f4b08 00:21:02.609 [2024-07-24 23:21:25.041661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.041695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.054862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f5378 00:21:02.609 [2024-07-24 23:21:25.056644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.056677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.070457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f5be8 00:21:02.609 [2024-07-24 23:21:25.072138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.072201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:02.609 [2024-07-24 23:21:25.085324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f6458 00:21:02.609 [2024-07-24 23:21:25.086933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.609 [2024-07-24 23:21:25.086967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.101400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f6cc8 00:21:02.869 [2024-07-24 23:21:25.103013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.103062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.116603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f7538 00:21:02.869 [2024-07-24 23:21:25.118106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.118179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.131048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f7da8 00:21:02.869 [2024-07-24 23:21:25.132722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.132754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.145619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f8618 00:21:02.869 [2024-07-24 23:21:25.147116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.147171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.160441] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f8e88 00:21:02.869 [2024-07-24 23:21:25.161934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.161965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.174993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f96f8 00:21:02.869 [2024-07-24 23:21:25.176582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.176614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.189529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190f9f68 00:21:02.869 [2024-07-24 23:21:25.190967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.190999] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.204547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fa7d8 00:21:02.869 [2024-07-24 23:21:25.206053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.206087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.219732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fb048 00:21:02.869 [2024-07-24 23:21:25.221463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.221496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.235278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fb8b8 00:21:02.869 [2024-07-24 23:21:25.236864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.236904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.249955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fc128 00:21:02.869 [2024-07-24 23:21:25.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.251448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.264663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fc998 00:21:02.869 [2024-07-24 23:21:25.266044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.266080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.279358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fd208 00:21:02.869 [2024-07-24 23:21:25.280792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.280827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.294202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fda78 00:21:02.869 [2024-07-24 23:21:25.295571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.295605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.308844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fe2e8 00:21:02.869 [2024-07-24 23:21:25.310277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.310328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.323518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190feb58 00:21:02.869 [2024-07-24 23:21:25.324925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.324963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:02.869 [2024-07-24 23:21:25.340301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12ce640) with pdu=0x2000190fef90 00:21:02.869 [2024-07-24 23:21:25.340397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:02.869 [2024-07-24 23:21:25.340424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:02.869 00:21:02.869 Latency(us) 00:21:02.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.869 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:02.869 nvme0n1 : 2.01 16258.19 63.51 0.00 0.00 7864.90 6583.39 29312.47 00:21:02.869 =================================================================================================================== 00:21:02.869 Total : 16258.19 63.51 0.00 0.00 7864.90 6583.39 29312.47 00:21:02.869 0 00:21:03.128 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:03.128 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:03.128 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:03.128 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:03.128 | .driver_specific 00:21:03.128 | .nvme_error 00:21:03.128 | .status_code 00:21:03.128 | .command_transient_transport_error' 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 128 > 0 )) 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80936 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80936 ']' 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80936 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80936 00:21:03.387 killing process with pid 80936 00:21:03.387 Received shutdown signal, test time was about 2.000000 seconds 00:21:03.387 00:21:03.387 Latency(us) 00:21:03.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.387 =================================================================================================================== 00:21:03.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80936' 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80936 00:21:03.387 23:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80936 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80995 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80995 /var/tmp/bperf.sock 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80995 ']' 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:03.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.646 23:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:03.646 [2024-07-24 23:21:26.062828] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:03.646 [2024-07-24 23:21:26.063207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80995 ] 00:21:03.646 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:03.646 Zero copy mechanism will not be used. 
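The trace above shows how the digest-error run is judged: host/digest.sh queries the bdevperf RPC socket for nvme0n1 I/O statistics and extracts the transient-transport-error counter from the JSON, which came back as 128 for the run that just finished (hence the passing "(( 128 > 0 ))" check). A minimal stand-alone sketch of that same check, assuming the rpc.py path and /var/tmp/bperf.sock socket that appear in the trace (the dotted jq path below is equivalent to the piped filter the script uses):

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Any value > 0 means the injected digest corruption surfaced to the host as
    # COMMAND TRANSIENT TRANSPORT ERROR completions, as seen in the log above.
    if (( errcount > 0 )); then
        echo "transient transport errors observed: $errcount"
    fi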
00:21:03.904 [2024-07-24 23:21:26.199707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.904 [2024-07-24 23:21:26.349809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.163 [2024-07-24 23:21:26.424961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:04.730 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.730 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:21:04.730 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:04.730 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:04.989 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:05.248 nvme0n1 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:05.248 23:21:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:05.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:05.248 Zero copy mechanism will not be used. 00:21:05.248 Running I/O for 2 seconds... 
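Before this larger-block (131072-byte, QD16) run starts issuing I/O, the trace sets it up in a few RPC steps: CRC32C error injection is first disabled, the controller is attached over TCP with data digest enabled (--ddgst), injection is then switched to corrupt 32 CRC32C operations, and perform_tests starts the 2-second randwrite workload. Condensed from the commands visible in the trace; note the accel_error_inject_error calls go through rpc_cmd, whose socket expansion is hidden by xtrace here, so sending them to the default RPC socket below is an assumption:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    $RPC accel_error_inject_error -o crc32c -t disable            # assumption: default RPC socket; injection off at this point, as in the trace
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32      # corrupt the next 32 CRC32C calculations
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests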
00:21:05.248 [2024-07-24 23:21:27.703977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.704352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.704384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.248 [2024-07-24 23:21:27.709173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.709480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.709508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.248 [2024-07-24 23:21:27.714117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.714441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.714489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.248 [2024-07-24 23:21:27.719180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.719503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.719531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.248 [2024-07-24 23:21:27.724236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.724573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.724600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.248 [2024-07-24 23:21:27.729465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.248 [2024-07-24 23:21:27.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.248 [2024-07-24 23:21:27.729785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.734849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.735156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.735208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.739732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.740052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.740080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.744663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.744942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.744969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.749493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.749771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.749797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.754643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.754920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.754947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.759861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.760207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.760235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.764923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.765258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.765285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.770080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.509 [2024-07-24 23:21:27.770418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.509 [2024-07-24 23:21:27.770448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.509 [2024-07-24 23:21:27.775267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.775542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.775568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.779913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.780281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.780313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.784895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.785181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.785217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.789669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.789939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.789965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.794871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.795141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.795176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.800187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.800527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.800552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.804864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.805133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.805167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.809673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.809945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.809970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.814429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.814701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.814726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.819695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.820025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.820053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.824881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.825170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.825209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.829657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.829930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.829956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.834392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.834663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.839160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.839432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 
[2024-07-24 23:21:27.839458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.843748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.844062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.844091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.848560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.848830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.848855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.853288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.853580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.853605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.857971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.858286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.858317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.862710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.862981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.863007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.867415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.867703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.867746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.872343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.872650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.872676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.877287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.877586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.877613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.882664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.882979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.883007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.888065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.888461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.888518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.893702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.894047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.510 [2024-07-24 23:21:27.894074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.510 [2024-07-24 23:21:27.898677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.510 [2024-07-24 23:21:27.898959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.898986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.903571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.903850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.903876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.908468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.908759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.908785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.913356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.913653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.913679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.918115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.918449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.918481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.923062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.923376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.923402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.927971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.928297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.928339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.933002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.933364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.933396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.937992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.938319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.942940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.943251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.943281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.947901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.948260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.948306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.952868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.953161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.953197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.957706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.957983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.958003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.962590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.962871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.962898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.967686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.967997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.968025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.972722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.973025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.973053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.977622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 
[2024-07-24 23:21:27.977915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.977942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.982629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.982914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.982941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.511 [2024-07-24 23:21:27.987503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.511 [2024-07-24 23:21:27.987811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.511 [2024-07-24 23:21:27.987832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:27.992952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:27.993482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:27.993680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:27.998542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:27.999020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:27.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.003659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.003989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.004026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.008619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.008900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.008927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.013516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.013812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.013839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.018401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.018705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.023269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.023551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.023578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.027993] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.028319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.028367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.032965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.033275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.033308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.037880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.038190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.772 [2024-07-24 23:21:28.038216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.772 [2024-07-24 23:21:28.042793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.772 [2024-07-24 23:21:28.043074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.047586] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.047866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.047892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.052703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.053054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.053083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.058156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.058499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.058531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.063198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.063480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.063506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.067983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.068307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.068340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.073101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.073429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.073510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.078571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.078900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.078929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:05.773 [2024-07-24 23:21:28.084072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.084433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.084462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.089016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.089352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.089383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.094124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.094463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.094490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.099104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.099436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.099468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.104022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.104357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.104386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.109026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.109422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.114114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.114411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.114437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.119013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.119341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.119373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.124404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.124723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.124749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.129911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.130222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.130258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.134913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.135224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.135250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.140353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.140673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.140715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.145767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.146053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.146080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.150697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.773 [2024-07-24 23:21:28.150980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.773 [2024-07-24 23:21:28.151006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.773 [2024-07-24 23:21:28.155585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.155864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.155892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.160486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.160768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.160796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.165587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.165872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.165898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.170634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.170918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.170945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.175539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.175819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.175846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.180401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.180683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.180709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.185732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.186037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.186065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.190852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.191152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.191188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.195701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.196017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.196044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.200624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.200909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.200935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.205499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.205797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.205823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.210516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.210797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.210818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.215696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.216018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.216047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.220779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.221070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 
[2024-07-24 23:21:28.221097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.225909] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.226245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.226272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.231179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.231505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.231547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.236371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.236684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.236713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.241742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.242032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.242060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.246949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.247289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.247318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:05.774 [2024-07-24 23:21:28.252262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:05.774 [2024-07-24 23:21:28.252609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:05.774 [2024-07-24 23:21:28.252668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.257796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.258084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.258111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.263106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.263452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.263485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.268121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.268428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.268461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.273315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.273680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.273712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.279273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.279593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.279620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.285026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.285374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.285407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.290162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.290497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.035 [2024-07-24 23:21:28.290531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.035 [2024-07-24 23:21:28.295331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.035 [2024-07-24 23:21:28.295683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.295716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.300571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.300892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.300924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.305810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.306152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.306201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.311649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.311998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.312030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.317099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.317428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.317459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.322546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.322890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.322922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.327607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.327958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.327990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.332902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.333267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.333298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.338316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.338668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.338701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.343833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.344210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.344242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.348975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.349328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.349360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.353941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.354276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.354307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.358954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.359291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.359322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.363902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.364229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.364260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.368820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 
[2024-07-24 23:21:28.369163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.374075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.374419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.374450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.379144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.379499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.379526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.384823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.385153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.385176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.390076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.390422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.390454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.395069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.395408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.395441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.400453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.400829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.405878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.406225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.406257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.410873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.411219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.411250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.416185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.416516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.416547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.421424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.421757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.421788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.426945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.427301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.427333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.432162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.432489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.036 [2024-07-24 23:21:28.432521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.036 [2024-07-24 23:21:28.437303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.036 [2024-07-24 23:21:28.437625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.437656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.442181] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.442525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.442556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.447579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.447926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.447958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.452972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.453319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.453366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.458296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.458661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.458691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.463383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.463701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.463722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.468429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.468745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.468766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.473444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.473744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.473775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:06.037 [2024-07-24 23:21:28.478331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.478645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.478675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.483549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.483887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.483944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.488531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.488848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.493312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.493656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.498580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.498897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.498929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.503356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.503677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.503708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.508303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.508626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.508656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.037 [2024-07-24 23:21:28.513298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.037 [2024-07-24 23:21:28.513593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.037 [2024-07-24 23:21:28.513616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.518531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.518876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.518909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.523661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.524025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.524059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.528702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.529026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.529058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.533875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.534227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.534269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.538971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.539336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.539368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.544158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.544498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.544530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.549256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.549580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.549612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.554298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.554657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.554689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.559256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.559579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.559610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.564221] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.564571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.564602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.569310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.569651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.574695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.575059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.575092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.579769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.580113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.580160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.585022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.585349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.585382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.590299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.590599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.590636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.298 [2024-07-24 23:21:28.595838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.298 [2024-07-24 23:21:28.596161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.298 [2024-07-24 23:21:28.596204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.601281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.601579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.601610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.606980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.607318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.607350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.612725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.613070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.613102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.618178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.618497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 
[2024-07-24 23:21:28.618544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.623693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.624056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.624100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.629059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.629416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.629447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.634376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.634747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.634778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.639487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.639833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.639861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.644984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.645322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.645355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.650386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.650691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.650723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.655779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.656191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.661286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.661633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.661666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.666581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.666907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.666939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.671589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.671923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.671971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.676633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.676958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.676991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.681563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.681890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.681921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.686758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.687092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.687124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.691934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.692259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.692291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.697287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.697619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.697651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.702216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.702542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.702574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.707178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.707491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.707514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.711847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.711971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.711994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.717240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.717326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.717348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.722235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.722332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.722354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.727139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.727237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.727260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.732215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.732298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.299 [2024-07-24 23:21:28.732321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.299 [2024-07-24 23:21:28.737136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.299 [2024-07-24 23:21:28.737246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.737269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.742033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.742131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.742155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.747021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.747134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.747157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.751828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.751954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.751979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.756828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.756923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.756945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.761711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 
23:21:28.761809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.761832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.766717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.766818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.766842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.772132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.772225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.772249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.300 [2024-07-24 23:21:28.777152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.300 [2024-07-24 23:21:28.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.300 [2024-07-24 23:21:28.777278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.782626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.782726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.782749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.787949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.788024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.788047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.792802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.792905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.792928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.797726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with 
pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.797827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.797850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.802600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.802698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.802721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.807482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.807584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.807607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.812732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.812837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.560 [2024-07-24 23:21:28.812861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.560 [2024-07-24 23:21:28.817759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.560 [2024-07-24 23:21:28.817858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.817881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.822666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.822765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.822788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.827515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.827613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.827636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.832906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.833010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.833034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.838235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.838336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.838359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.843099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.843215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.843236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.848228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.848331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.848354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.853638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.853733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.853754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.859046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.859142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.859163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.864342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.864463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.864485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.869191] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.869301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.869322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.874046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.874139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.874161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.878794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.878889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.878910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.883698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.883794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.883815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.888587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.888689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.888710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.893691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.893793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.893814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.898657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.898754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.898778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 
[2024-07-24 23:21:28.904049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.904119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.904143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.909389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.909485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.909506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.914217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.914312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.914335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.919488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.919568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.919590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.924725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.924818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.924839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.929642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.929739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.929760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.934605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.934707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.934728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.939431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.939534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.939557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.944345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.561 [2024-07-24 23:21:28.944457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.561 [2024-07-24 23:21:28.944478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.561 [2024-07-24 23:21:28.949211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.949307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.949328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.954060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.954164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.954199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.959042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.959138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.959159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.963822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.963942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.963964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.968697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.968793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.968814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.973485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.973596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.973616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.978439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.978535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.978556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.983613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.983725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.983746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.988590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.988685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.988706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.993586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.993696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.993717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:28.999812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:28.999956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:28.999979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.006491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.006608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.006630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.012387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.012505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.012527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.017969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.018075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.018098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.023509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.023607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.023628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.029568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.029651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.029672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.035228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.035316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.035339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.562 [2024-07-24 23:21:29.042024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.562 [2024-07-24 23:21:29.042124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.562 [2024-07-24 23:21:29.042147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.047606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.822 [2024-07-24 23:21:29.047730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.822 [2024-07-24 23:21:29.047752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.052964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.822 [2024-07-24 23:21:29.053070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.822 [2024-07-24 23:21:29.053092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.058436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.822 [2024-07-24 23:21:29.058547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.822 [2024-07-24 23:21:29.058568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.063890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.822 [2024-07-24 23:21:29.064004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.822 [2024-07-24 23:21:29.064026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.069454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.822 [2024-07-24 23:21:29.069568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.822 [2024-07-24 23:21:29.069589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.822 [2024-07-24 23:21:29.074589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.074681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.074701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.079698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.079794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.079815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.084878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.084975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 
23:21:29.085029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.090338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.090420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.090440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.095627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.095721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.095741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.100302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.100412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.100432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.104907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.104998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.105018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.109903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.110000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.110020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.114892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.115003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.115023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.119998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.120067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:06.823 [2024-07-24 23:21:29.120089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.125044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.125126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.125162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.129760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.129852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.129872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.134466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.134559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.134579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.139178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.139272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.139293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.143798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.143891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.143911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.148548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.148640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.148660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.153249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.153345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.153365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.157921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.158015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.158035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.163042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.163134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.163155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.168236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.168377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.168397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.172875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.172967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.172987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.177995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.178087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.178107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.183200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.183295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.183316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.187796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.187889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.187909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.192443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.192535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.192555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.197259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.197355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.197376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.823 [2024-07-24 23:21:29.201936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.823 [2024-07-24 23:21:29.202028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.823 [2024-07-24 23:21:29.202049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.206578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.206670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.206690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.211204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.211303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.211322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.215734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.215826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.215846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.220316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.220422] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.220443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.224961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.225052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.225073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.229731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.229824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.229845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.234412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.234503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.234523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.238958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.239050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.239070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.243638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.243730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.243750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.248330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.248407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.248428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.252899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.253008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.253029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.257774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.257867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.257887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.262411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.262503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.262523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.266981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.267075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.267095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.271710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.271802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.271822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.276366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.276474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.276494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.280944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.281036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.281056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.285639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 
23:21:29.285732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.285752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.290348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.290439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.290459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.294933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.295026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.295046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.299523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.299616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.299636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:06.824 [2024-07-24 23:21:29.304693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:06.824 [2024-07-24 23:21:29.304786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.824 [2024-07-24 23:21:29.304806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.309466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.309576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.309596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.314337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.314429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.314449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.318967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with 
pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.319059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.319079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.323534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.323627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.323647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.328094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.328178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.328200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.332847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.332943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.332963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.337599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.337692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.337712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.342243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.342335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.342356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.347137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.347266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.347300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.352397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.352506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.352542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.357345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.357441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.357462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.362104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.362215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.362235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.366840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.366934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.366955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.371545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.371640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.371661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.376209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.376305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.376326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.380921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.381015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.381035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.385659] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.385752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.385772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.390417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.390510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.390531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.395067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.395178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.395199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.399766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.399859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.399879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.404447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.404543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.404564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.409098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.409227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.409248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.085 [2024-07-24 23:21:29.413836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.085 [2024-07-24 23:21:29.413928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.085 [2024-07-24 23:21:29.413948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.418777] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.418861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.418883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.424135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.424219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.424242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.429505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.429624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.429645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.434774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.434872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.434894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.440126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.440234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.445405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.445473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.445495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.450572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.450667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.450688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 
[2024-07-24 23:21:29.455634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.455730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.455751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.460640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.460735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.460756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.465530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.465641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.465665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.470411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.470491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.470512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.475223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.475307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.475328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.479998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.480069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.480092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.484996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.485081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.485103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.489940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.490020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.490040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.494691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.494768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.494789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.499506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.499600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.499620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.504479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.504561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.504582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.509213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.509294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.509315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.514010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.514088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.514109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.518835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.518915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.518935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.523854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.523977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.524000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.528770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.528850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.528871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.533650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.533731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.533751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.538548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.538626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.538648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.543473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.543553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.086 [2024-07-24 23:21:29.543574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.086 [2024-07-24 23:21:29.548242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.086 [2024-07-24 23:21:29.548321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.087 [2024-07-24 23:21:29.548342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.087 [2024-07-24 23:21:29.553043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.087 [2024-07-24 23:21:29.553122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.087 [2024-07-24 23:21:29.553143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.087 [2024-07-24 23:21:29.557963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.087 [2024-07-24 23:21:29.558063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.087 [2024-07-24 23:21:29.558085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.087 [2024-07-24 23:21:29.562962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.087 [2024-07-24 23:21:29.563043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.087 [2024-07-24 23:21:29.563063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.568281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.568348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.568369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.573318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.573386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.573408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.578191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.578272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.578294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.583023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.583106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.583127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.587855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.587965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.587987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.592819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.592899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.597650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.597730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.597751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.602489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.602586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.602606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.607655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.607754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.607776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.612882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.612954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.618012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.618098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.618120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.623256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.623327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 
23:21:29.623348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.628705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.628775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.347 [2024-07-24 23:21:29.628797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.347 [2024-07-24 23:21:29.634004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.347 [2024-07-24 23:21:29.634095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.634117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.639360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.639433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.639455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.644652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.644732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.644752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.649763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.649844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.649864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.654879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.654989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.655010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.660128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.660213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.348 [2024-07-24 23:21:29.660235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.665235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.665314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.665334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.670068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.670194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.670215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.674904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.674984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.675004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.680025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.680133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.685367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.685445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.685465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:07.348 [2024-07-24 23:21:29.690078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1492020) with pdu=0x2000190fef90 00:21:07.348 [2024-07-24 23:21:29.690187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.348 [2024-07-24 23:21:29.690208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:07.348 00:21:07.348 Latency(us) 00:21:07.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:07.348 nvme0n1 : 2.00 6113.53 764.19 0.00 0.00 2611.28 1995.87 9889.98 00:21:07.348 
=================================================================================================================== 00:21:07.348 Total : 6113.53 764.19 0.00 0.00 2611.28 1995.87 9889.98 00:21:07.348 0 00:21:07.348 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:07.348 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:07.348 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:07.348 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:07.348 | .driver_specific 00:21:07.348 | .nvme_error 00:21:07.348 | .status_code 00:21:07.348 | .command_transient_transport_error' 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 )) 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80995 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80995 ']' 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80995 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.669 23:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80995 00:21:07.669 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:07.669 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:07.669 killing process with pid 80995 00:21:07.669 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80995' 00:21:07.669 Received shutdown signal, test time was about 2.000000 seconds 00:21:07.669 00:21:07.669 Latency(us) 00:21:07.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.669 =================================================================================================================== 00:21:07.669 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.669 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80995 00:21:07.669 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80995 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80783 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80783 ']' 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80783 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80783 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.928 23:21:30 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.928 killing process with pid 80783 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80783' 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80783 00:21:07.928 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80783 00:21:08.496 00:21:08.496 real 0m19.082s 00:21:08.496 user 0m36.456s 00:21:08.496 sys 0m5.130s 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:08.496 ************************************ 00:21:08.496 END TEST nvmf_digest_error 00:21:08.496 ************************************ 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.496 rmmod nvme_tcp 00:21:08.496 rmmod nvme_fabrics 00:21:08.496 rmmod nvme_keyring 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80783 ']' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80783 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80783 ']' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80783 00:21:08.496 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80783) - No such process 00:21:08.496 Process with pid 80783 is not found 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80783 is not found' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.496 23:21:30 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:08.496 00:21:08.496 real 0m39.344s 00:21:08.496 user 1m14.187s 00:21:08.496 sys 0m10.571s 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:08.496 23:21:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:08.496 ************************************ 00:21:08.497 END TEST nvmf_digest 00:21:08.497 ************************************ 00:21:08.497 23:21:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:08.497 23:21:30 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:21:08.497 23:21:30 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:21:08.497 23:21:30 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:08.497 23:21:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:08.497 23:21:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:08.497 23:21:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:08.497 ************************************ 00:21:08.497 START TEST nvmf_host_multipath 00:21:08.497 ************************************ 00:21:08.497 23:21:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:08.756 * Looking for test storage... 00:21:08.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.756 
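With the common environment sourced, nvmftestinit takes the NET_TYPE=virt path and nvmf_veth_init builds the test network out of veth pairs, a bridge, and a target network namespace; the ip/iptables commands for this appear verbatim a little further down in the trace. A condensed sketch of that topology, using the device and namespace names from this log (cleanup of leftover devices is omitted, the second target interface for 10.0.0.3 is only noted in a comment, and the commands assume root):

    # The target side lives in its own network namespace; the initiator stays in the root one.
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if legs carry traffic, the *_br legs get enslaved to a bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # (a second pair, nvmf_tgt_if2/nvmf_tgt_br2, is created the same way for 10.0.0.3)

    # Addressing: initiator 10.0.0.1, target 10.0.0.2.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # Bring everything up and tie the bridge-side legs together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic in, allow forwarding across the bridge, and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2

The ping statistics that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) confirm this topology before nvmf_tgt is started under ip netns exec nvmf_tgt_ns_spdk and the multipath listeners on ports 4420 and 4421 are added.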
23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:08.756 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:08.757 Cannot find device "nvmf_tgt_br" 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:08.757 Cannot find device "nvmf_tgt_br2" 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:08.757 Cannot find device "nvmf_tgt_br" 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:08.757 Cannot find device "nvmf_tgt_br2" 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:08.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:08.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:08.757 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:09.016 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # 
ip link set nvmf_init_if up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:09.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:09.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:21:09.017 00:21:09.017 --- 10.0.0.2 ping statistics --- 00:21:09.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.017 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:09.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:09.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:09.017 00:21:09.017 --- 10.0.0.3 ping statistics --- 00:21:09.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.017 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:09.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:09.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:09.017 00:21:09.017 --- 10.0.0.1 ping statistics --- 00:21:09.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:09.017 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81254 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81254 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81254 ']' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.017 23:21:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:09.017 [2024-07-24 23:21:31.484793] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:21:09.017 [2024-07-24 23:21:31.484917] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.275 [2024-07-24 23:21:31.629163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:09.534 [2024-07-24 23:21:31.777304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:09.534 [2024-07-24 23:21:31.777387] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.534 [2024-07-24 23:21:31.777402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.534 [2024-07-24 23:21:31.777414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.534 [2024-07-24 23:21:31.777423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:09.534 [2024-07-24 23:21:31.778168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.534 [2024-07-24 23:21:31.778200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.534 [2024-07-24 23:21:31.856315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81254 00:21:10.101 23:21:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:10.359 [2024-07-24 23:21:32.752394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.359 23:21:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:10.618 Malloc0 00:21:10.618 23:21:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:10.877 23:21:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.135 23:21:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.393 [2024-07-24 23:21:33.778842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.393 23:21:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.652 [2024-07-24 23:21:34.067051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81310 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 
-- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81310 /var/tmp/bdevperf.sock 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81310 ']' 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.652 23:21:34 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:12.592 23:21:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.592 23:21:35 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:21:12.592 23:21:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:13.159 23:21:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:13.159 Nvme0n1 00:21:13.418 23:21:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:13.677 Nvme0n1 00:21:13.677 23:21:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:13.677 23:21:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:14.620 23:21:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:14.620 23:21:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:14.878 23:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:15.136 23:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:15.136 23:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:15.136 23:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81355 00:21:15.136 23:21:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.703 Attaching 4 probes... 00:21:21.703 @path[10.0.0.2, 4421]: 17745 00:21:21.703 @path[10.0.0.2, 4421]: 18462 00:21:21.703 @path[10.0.0.2, 4421]: 18235 00:21:21.703 @path[10.0.0.2, 4421]: 19074 00:21:21.703 @path[10.0.0.2, 4421]: 18821 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81355 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:21.703 23:21:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:21.703 23:21:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:21.962 23:21:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:21.962 23:21:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81473 00:21:21.962 23:21:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:21.962 23:21:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:28.523 Attaching 4 probes... 
00:21:28.523 @path[10.0.0.2, 4420]: 18349 00:21:28.523 @path[10.0.0.2, 4420]: 18746 00:21:28.523 @path[10.0.0.2, 4420]: 18706 00:21:28.523 @path[10.0.0.2, 4420]: 18652 00:21:28.523 @path[10.0.0.2, 4420]: 18664 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81473 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:28.523 23:21:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.782 23:21:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:28.782 23:21:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:28.782 23:21:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81586 00:21:28.782 23:21:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.390 Attaching 4 probes... 
00:21:35.390 @path[10.0.0.2, 4421]: 15443 00:21:35.390 @path[10.0.0.2, 4421]: 18244 00:21:35.390 @path[10.0.0.2, 4421]: 18119 00:21:35.390 @path[10.0.0.2, 4421]: 18688 00:21:35.390 @path[10.0.0.2, 4421]: 18753 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81586 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:35.390 23:21:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:35.649 23:21:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:35.649 23:21:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.649 23:21:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81698 00:21:35.649 23:21:58 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.202 Attaching 4 probes... 
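Both listeners have just been marked inaccessible, so the next trace window should record no @path samples at all and the jq filter above resolves to an empty port. For a quick manual look at what each listener currently reports, the same nvmf_subsystem_get_listeners output can be flattened with jq, using the field names exactly as they appear in the filters in this trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | "\(.address.trsvcid) \(.ana_states[0].ana_state)"'
  # expected at this point in the run: both 4420 and 4421 reporting inaccessible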
00:21:42.202 00:21:42.202 00:21:42.202 00:21:42.202 00:21:42.202 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81698 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:42.202 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:42.460 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:42.460 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:42.460 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81816 00:21:42.460 23:22:04 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:49.016 23:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:49.016 23:22:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:49.016 Attaching 4 probes... 
00:21:49.016 @path[10.0.0.2, 4421]: 17561 00:21:49.016 @path[10.0.0.2, 4421]: 17636 00:21:49.016 @path[10.0.0.2, 4421]: 17892 00:21:49.016 @path[10.0.0.2, 4421]: 17848 00:21:49.016 @path[10.0.0.2, 4421]: 17689 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81816 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:49.016 23:22:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:50.391 23:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:50.391 23:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81935 00:21:50.391 23:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:50.391 23:22:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:56.951 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:56.951 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:56.951 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:56.951 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.951 Attaching 4 probes... 
00:21:56.951 @path[10.0.0.2, 4420]: 17607 00:21:56.951 @path[10.0.0.2, 4420]: 17693 00:21:56.951 @path[10.0.0.2, 4420]: 17654 00:21:56.951 @path[10.0.0.2, 4420]: 18138 00:21:56.951 @path[10.0.0.2, 4420]: 17486 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81935 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.952 23:22:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:56.952 [2024-07-24 23:22:19.084474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.952 23:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:56.952 23:22:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:03.512 23:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:03.512 23:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82110 00:22:03.512 23:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81254 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:03.512 23:22:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:10.104 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:10.104 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:10.104 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:10.104 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.105 Attaching 4 probes... 
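The final two checks exercise failover by reconfiguring the target rather than only flipping ANA states: the 4421 listener was removed outright (so I/O had to migrate to 4420, as the trace just confirmed) and has now been added back and marked optimized, after which I/O is expected to return to 4421. A condensed sketch of that sequence, taken from the host/multipath.sh@100-112 trace in this log and reusing the confirm_io_on_port helper sketched earlier:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the optimized path entirely; the host multipath bdev should fail over to 4420.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  sleep 1
  confirm_io_on_port non_optimized 4420

  # Bring 4421 back and mark it optimized; I/O should move back to it.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 6
  confirm_io_on_port optimized 4421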
00:22:10.105 @path[10.0.0.2, 4421]: 16813 00:22:10.105 @path[10.0.0.2, 4421]: 17088 00:22:10.105 @path[10.0.0.2, 4421]: 17263 00:22:10.105 @path[10.0.0.2, 4421]: 17422 00:22:10.105 @path[10.0.0.2, 4421]: 17383 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82110 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81310 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81310 ']' 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81310 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81310 00:22:10.105 killing process with pid 81310 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81310' 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81310 00:22:10.105 23:22:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81310 00:22:10.105 Connection closed with partial response: 00:22:10.105 00:22:10.105 00:22:10.105 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81310 00:22:10.105 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:10.105 [2024-07-24 23:21:34.147109] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:10.105 [2024-07-24 23:21:34.147261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81310 ] 00:22:10.105 [2024-07-24 23:21:34.284915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.105 [2024-07-24 23:21:34.448869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.105 [2024-07-24 23:21:34.524821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:10.105 Running I/O for 90 seconds... 
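Everything from here on is the bdevperf-side log (try.txt) that multipath.sh@118 dumps after the run. Each NOTICE pair below is nvme_qpair.c printing an I/O command together with its completion: the (03/02) status SPDK prints as (sct/sc) is status code type 3h (path related), status code 2h (Asymmetric Access Inaccessible), which is what the target returns for I/O submitted on a path whose ANA state is inaccessible; the host multipath bdev is expected to retry those commands on the other path rather than fail them, which is consistent with the steady @path throughput counts recorded above. To reduce a dump like this to a quick summary, a couple of one-liners along these lines should do, assuming try.txt is still present after the run (the file path is the one cat'd above):

  # rough tally of completions that came back with the path-related ANA status
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

  # the same count bucketed per second, to line up with the ANA state changes;
  # the first bracketed field on each line is the bdevperf timestamp
  grep 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt \
      | awk -F'[][]' '{print $2}' | cut -d. -f1 | sort | uniq -c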
00:22:10.105 [2024-07-24 23:21:44.365022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.105 [2024-07-24 23:21:44.365765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.365971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.365984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.366026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.366047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.366061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.366080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.366095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.366114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.105 [2024-07-24 23:21:44.366128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:10.105 [2024-07-24 23:21:44.366147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.366612] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.366626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.369417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369471] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 
p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.369968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.106 [2024-07-24 23:21:44.369982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:10.106 [2024-07-24 23:21:44.370019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.106 [2024-07-24 23:21:44.370038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.370577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.370855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.370869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.371783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 
[2024-07-24 23:21:44.371810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.371871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.371885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.371904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.371950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.371973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.371988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.372023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.372058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.107 [2024-07-24 23:21:44.372092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82240 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:10.107 [2024-07-24 23:21:44.372368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.107 [2024-07-24 23:21:44.372398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.372938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:22:10.108 [2024-07-24 23:21:44.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.372986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.108 [2024-07-24 23:21:44.373034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:44.373315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:44.373329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.927863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.927986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.108 [2024-07-24 23:21:50.928594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:10.108 [2024-07-24 23:21:50.928613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.928627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.928660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.109 [2024-07-24 23:21:50.928877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.928965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.928994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.109 [2024-07-24 23:21:50.929599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:10.109 [2024-07-24 23:21:50.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.109 [2024-07-24 23:21:50.929742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:22:10.110 [2024-07-24 23:21:50.929924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.929973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.929987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.110 [2024-07-24 23:21:50.930797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.930967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.930981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.931000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.931015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.931042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.110 [2024-07-24 23:21:50.931057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.931077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.931091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.931112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.931127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:10.110 [2024-07-24 23:21:50.931148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.110 [2024-07-24 23:21:50.931162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.931756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.931967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.931983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:22:10.111 [2024-07-24 23:21:50.932280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.932376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.111 [2024-07-24 23:21:50.933136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:10.111 [2024-07-24 23:21:50.933593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.111 [2024-07-24 23:21:50.933612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:50.933944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:50.933960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.041951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.112 [2024-07-24 23:21:58.042466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:10.112 [2024-07-24 23:21:58.042487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:10.112 [2024-07-24 23:21:58.042502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:22:10.112 [2024-07-24 23:21:58.042523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:10.112 [2024-07-24 23:21:58.042554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:22:10.112-00:22:10.115 [2024-07-24 23:21:58.042607 - 23:21:58.047864] nvme_qpair.c: (repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices: WRITE sqid:1 lba:102696-103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:102224-102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1) 
00:22:10.115-00:22:10.117 [2024-07-24 23:22:11.469934 - 23:22:11.472848] nvme_qpair.c: (repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices: WRITE sqid:1 lba:32280-32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:31704-32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, initially completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, then with ABORTED - SQ DELETION (00/08) qid:1 cid:0) 
00:22:10.117 [2024-07-24 23:22:11.472861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.117 [2024-07-24 23:22:11.472873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.472887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.472899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.472913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.472925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.472939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.472956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.472971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.472983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.472997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:119 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.118 [2024-07-24 23:22:11.473246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.118 [2024-07-24 23:22:11.473603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b0150 is same with the state(5) to be set 00:22:10.118 [2024-07-24 23:22:11.473633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.118 [2024-07-24 23:22:11.473643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.118 [2024-07-24 23:22:11.473653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32256 len:8 PRP1 0x0 PRP2 0x0 00:22:10.118 [2024-07-24 23:22:11.473671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.118 [2024-07-24 23:22:11.473695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.118 [2024-07-24 23:22:11.473704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32264 len:8 PRP1 0x0 PRP2 0x0 00:22:10.118 [2024-07-24 23:22:11.473716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.118 [2024-07-24 23:22:11.473737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.118 [2024-07-24 23:22:11.473746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32272 len:8 PRP1 0x0 PRP2 0x0 00:22:10.118 [2024-07-24 23:22:11.473758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.118 [2024-07-24 23:22:11.473779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.118 [2024-07-24 23:22:11.473788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32664 len:8 PRP1 0x0 PRP2 0x0 00:22:10.118 [2024-07-24 23:22:11.473800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.118 [2024-07-24 23:22:11.473812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.118 [2024-07-24 23:22:11.473821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.473830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32672 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.473842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.473854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.473869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.473878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32680 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.473890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.473902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.473911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.473920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32688 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.473932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.473943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.473952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.473967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32696 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.473979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.473991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.474000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.474014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32704 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.474026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.474047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.474057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32712 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.474068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:10.119 [2024-07-24 23:22:11.474089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:10.119 [2024-07-24 23:22:11.474098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32720 len:8 PRP1 0x0 PRP2 0x0 00:22:10.119 [2024-07-24 23:22:11.474110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474185] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11b0150 was disconnected and freed. reset controller. 
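The long run of completions above is the host side of the path failover: every I/O still queued on the qpair being torn down is completed locally with the ABORTED - SQ DELETION status (the (00/08) pair printed by spdk_nvme_print_completion is the status code type and status code, 0x0/0x08, the generic "command aborted due to SQ deletion" code). If you want to tally these records from a saved copy of this console output, a plain grep is enough; the log file name below is illustrative, not taken from this run:
# count the aborted completions, then split them by opcode (READ vs WRITE)
grep -o 'ABORTED - SQ DELETION (00/08)' bdevperf-console.log | wc -l
grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:1' bdevperf-console.log | sort | uniq -c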
00:22:10.119 [2024-07-24 23:22:11.474293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.119 [2024-07-24 23:22:11.474317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.119 [2024-07-24 23:22:11.474344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.119 [2024-07-24 23:22:11.474370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:10.119 [2024-07-24 23:22:11.474394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.119 [2024-07-24 23:22:11.474426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.119 [2024-07-24 23:22:11.474446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b100 is same with the state(5) to be set 00:22:10.119 [2024-07-24 23:22:11.475516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:10.119 [2024-07-24 23:22:11.475569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113b100 (9): Bad file descriptor 00:22:10.119 [2024-07-24 23:22:11.475975] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.119 [2024-07-24 23:22:11.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x113b100 with addr=10.0.0.2, port=4421 00:22:10.119 [2024-07-24 23:22:11.476025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113b100 is same with the state(5) to be set 00:22:10.119 [2024-07-24 23:22:11.476074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113b100 (9): Bad file descriptor 00:22:10.119 [2024-07-24 23:22:11.476107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:10.119 [2024-07-24 23:22:11.476133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:10.119 [2024-07-24 23:22:11.476147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:10.119 [2024-07-24 23:22:11.476197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
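Here the failover itself is visible: the admin qpair is drained the same way, the host drops the connection to the 10.0.0.2:4420 listener, and its first reconnect attempt to the alternate listener at 10.0.0.2:4421 is refused (connect() errno 111), so this reset attempt fails; the records that follow show a second reset that succeeds about ten seconds later. A rough sketch of the listener juggling that drives this, using rpc.py verbs that also appear verbatim in this log (the 4421 add is inferred from the reconnect target, and the exact sequencing in host/multipath.sh may differ):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# fail the active path: drop the 4420 listener the host is currently connected to
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# make the alternate path available so the host's reconnect to 4421 can eventually succeed
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421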
00:22:10.119 [2024-07-24 23:22:11.476216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:10.119 [2024-07-24 23:22:21.535947] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:10.119 Received shutdown signal, test time was about 55.622552 seconds 00:22:10.119 00:22:10.119 Latency(us) 00:22:10.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.119 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.119 Verification LBA range: start 0x0 length 0x4000 00:22:10.119 Nvme0n1 : 55.62 7683.21 30.01 0.00 0.00 16626.51 1199.01 7015926.69 00:22:10.119 =================================================================================================================== 00:22:10.119 Total : 7683.21 30.01 0.00 0.00 16626.51 1199.01 7015926.69 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.119 rmmod nvme_tcp 00:22:10.119 rmmod nvme_fabrics 00:22:10.119 rmmod nvme_keyring 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81254 ']' 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81254 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81254 ']' 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81254 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81254 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.119 killing process with pid 81254 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81254' 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- 
# kill 81254 00:22:10.119 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81254 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.377 23:22:32 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:10.636 00:22:10.636 real 1m1.915s 00:22:10.636 user 2m51.040s 00:22:10.636 sys 0m19.319s 00:22:10.636 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.636 23:22:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:10.636 ************************************ 00:22:10.636 END TEST nvmf_host_multipath 00:22:10.636 ************************************ 00:22:10.636 23:22:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:10.636 23:22:32 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:10.636 23:22:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:10.636 23:22:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.636 23:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:10.636 ************************************ 00:22:10.636 START TEST nvmf_timeout 00:22:10.636 ************************************ 00:22:10.636 23:22:32 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:10.636 * Looking for test storage... 
00:22:10.636 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:10.636 23:22:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:10.636 23:22:32 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.636 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.637 
23:22:33 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.637 23:22:33 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:10.637 Cannot find device "nvmf_tgt_br" 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:10.637 Cannot find device "nvmf_tgt_br2" 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:10.637 Cannot find device "nvmf_tgt_br" 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:10.637 Cannot find device "nvmf_tgt_br2" 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:22:10.637 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:10.895 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:10.895 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:10.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:22:10.895 00:22:10.895 --- 10.0.0.2 ping statistics --- 00:22:10.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.895 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:10.895 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:10.895 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:10.895 00:22:10.895 --- 10.0.0.3 ping statistics --- 00:22:10.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.895 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:10.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:10.895 00:22:10.895 --- 10.0.0.1 ping statistics --- 00:22:10.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.895 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.895 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82423 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82423 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82423 ']' 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.152 23:22:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:11.152 [2024-07-24 23:22:33.459324] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:22:11.152 [2024-07-24 23:22:33.459425] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.152 [2024-07-24 23:22:33.604161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:11.409 [2024-07-24 23:22:33.757999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.409 [2024-07-24 23:22:33.758076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.409 [2024-07-24 23:22:33.758092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.409 [2024-07-24 23:22:33.758103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.409 [2024-07-24 23:22:33.758112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.409 [2024-07-24 23:22:33.758591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.409 [2024-07-24 23:22:33.758630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.409 [2024-07-24 23:22:33.836194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:12.344 [2024-07-24 23:22:34.779090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.344 23:22:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:12.910 Malloc0 00:22:12.910 23:22:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.167 23:22:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:13.425 23:22:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.682 [2024-07-24 23:22:36.038290] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82478 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82478 /var/tmp/bdevperf.sock 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82478 ']' 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.682 23:22:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:13.682 [2024-07-24 23:22:36.110318] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:13.682 [2024-07-24 23:22:36.110400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82478 ] 00:22:13.940 [2024-07-24 23:22:36.245637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.940 [2024-07-24 23:22:36.386201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.198 [2024-07-24 23:22:36.453077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:14.765 23:22:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.765 23:22:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:14.765 23:22:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:15.023 23:22:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:15.281 NVMe0n1 00:22:15.281 23:22:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82496 00:22:15.281 23:22:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:15.281 23:22:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:15.540 Running I/O for 10 seconds... 
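For the timeout test the initiator is a bdevperf instance attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, so after a path drop bdev_nvme retries the connection roughly every 2 seconds and gives the controller up for lost if it has not reconnected within 5 seconds. While the 10-second run above is in flight, the same RPC socket can be polled to watch the attached controller; this is an illustrative aside under those assumptions, not part of the captured run:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# list the controllers bdevperf has attached, once a second (Ctrl-C to stop)
while sleep 1; do $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers; done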
00:22:16.595 23:22:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:16.595 [2024-07-24 23:22:38.969769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2156750 is same with the state(5) to be set
[... identical tcp.c:1653 recv-state messages for tqpair=0x2156750 repeated for timestamps 23:22:38.969843 through 23:22:38.970899 ...] 00:22:16.597 [2024-07-24 23:22:38.970908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2156750 is same
with the state(5) to be set 00:22:16.597 [2024-07-24 23:22:38.970916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2156750 is same with the state(5) to be set 00:22:16.597 [2024-07-24 23:22:38.970924] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2156750 is same with the state(5) to be set 00:22:16.597 [2024-07-24 23:22:38.971001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.597 [2024-07-24 23:22:38.971253] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.597 [2024-07-24 23:22:38.971263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ print_command / ABORTED - SQ DELETION print_completion pair repeats for each remaining outstanding read, lba 64936 through 65728, timestamps 23:22:38.971275 through 23:22:38.973635 ...] 00:22:16.599 [2024-07-24 23:22:38.973651]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.600 [2024-07-24 23:22:38.973660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.973981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:16.600 [2024-07-24 23:22:38.973990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.974007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.600 [2024-07-24 23:22:38.974016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.974026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2159820 is same with the state(5) to be set 00:22:16.600 [2024-07-24 23:22:38.974038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:16.600 [2024-07-24 23:22:38.974046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:16.600 [2024-07-24 23:22:38.974071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65752 len:8 PRP1 0x0 PRP2 0x0 00:22:16.600 [2024-07-24 23:22:38.974097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:16.600 [2024-07-24 23:22:38.974164] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2159820 was disconnected and freed. reset controller. 
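The wall of READ/WRITE aborts above is the in-flight queue draining once the listener disappears and the qpair is torn down: every command still outstanding is completed as ABORTED - SQ DELETION before the qpair is freed. The sizes are consistent with the bdevperf arguments at the start of the run: each entry reports len:8 blocks, and 8 blocks at a 512-byte block size gives 8 x 512 = 4096 bytes, matching -o 4096 (the 512-byte block size is an inference from that arithmetic, not something stated anywhere in the log).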
00:22:16.600 [2024-07-24 23:22:38.974492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.600 [2024-07-24 23:22:38.974577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e7d40 (9): Bad file descriptor 00:22:16.600 [2024-07-24 23:22:38.974713] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.600 [2024-07-24 23:22:38.974734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e7d40 with addr=10.0.0.2, port=4420 00:22:16.600 [2024-07-24 23:22:38.974745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e7d40 is same with the state(5) to be set 00:22:16.600 [2024-07-24 23:22:38.974764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e7d40 (9): Bad file descriptor 00:22:16.600 [2024-07-24 23:22:38.974780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.600 [2024-07-24 23:22:38.974790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.600 [2024-07-24 23:22:38.974801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.600 [2024-07-24 23:22:38.974820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:16.600 [2024-07-24 23:22:38.974830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:16.600 23:22:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:18.502 [2024-07-24 23:22:40.975185] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.502 [2024-07-24 23:22:40.975278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e7d40 with addr=10.0.0.2, port=4420 00:22:18.502 [2024-07-24 23:22:40.975296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e7d40 is same with the state(5) to be set 00:22:18.502 [2024-07-24 23:22:40.975341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e7d40 (9): Bad file descriptor 00:22:18.502 [2024-07-24 23:22:40.975365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:18.502 [2024-07-24 23:22:40.975376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:18.502 [2024-07-24 23:22:40.975389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.502 [2024-07-24 23:22:40.975422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:18.502 [2024-07-24 23:22:40.975436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.761 23:22:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:18.761 23:22:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.761 23:22:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:19.019 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:19.019 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:19.019 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:19.019 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:19.278 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:19.278 23:22:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:20.653 [2024-07-24 23:22:42.975644] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.653 [2024-07-24 23:22:42.975710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e7d40 with addr=10.0.0.2, port=4420 00:22:20.653 [2024-07-24 23:22:42.975727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e7d40 is same with the state(5) to be set 00:22:20.653 [2024-07-24 23:22:42.975758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e7d40 (9): Bad file descriptor 00:22:20.653 [2024-07-24 23:22:42.975793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:20.653 [2024-07-24 23:22:42.975807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:20.653 [2024-07-24 23:22:42.975820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:20.653 [2024-07-24 23:22:42.975853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:20.653 [2024-07-24 23:22:42.975867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.578 [2024-07-24 23:22:44.976050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.578 [2024-07-24 23:22:44.976152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.578 [2024-07-24 23:22:44.976167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:22.578 [2024-07-24 23:22:44.976180] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:22.578 [2024-07-24 23:22:44.976224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
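For reference, the name checks traced above at host/timeout.sh@57 and @58 reduce to two RPC queries against the bdevperf application; a minimal standalone sketch using only the socket path and tool paths shown in this run (it assumes the bdevperf process from this test is still up):

  # confirm the attached controller is still reported by bdevperf
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expected here: NVMe0
  # confirm the namespace bdev created on top of it is still present
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'              # expected here: NVMe0n1

At this point in the log both names are still reported even though the connection to 10.0.0.2:4420 keeps failing; by the later checks at @62/@63 the controller has been given up on and both queries come back empty.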
00:22:23.513 00:22:23.513 Latency(us) 00:22:23.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.513 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.513 Verification LBA range: start 0x0 length 0x4000 00:22:23.513 NVMe0n1 : 8.15 995.33 3.89 15.72 0.00 126518.12 4259.84 7046430.72 00:22:23.513 =================================================================================================================== 00:22:23.513 Total : 995.33 3.89 15.72 0.00 126518.12 4259.84 7046430.72 00:22:23.513 0 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:24.449 23:22:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82496 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82478 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82478 ']' 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82478 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82478 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:24.708 killing process with pid 82478 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82478' 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82478 00:22:24.708 Received shutdown signal, test time was about 9.281358 seconds 00:22:24.708 00:22:24.708 Latency(us) 00:22:24.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.708 =================================================================================================================== 00:22:24.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.708 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82478 00:22:24.966 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.225 [2024-07-24 23:22:47.627764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82618 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82618 
/var/tmp/bdevperf.sock 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82618 ']' 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.225 23:22:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:25.225 [2024-07-24 23:22:47.697714] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:25.225 [2024-07-24 23:22:47.697805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82618 ] 00:22:25.484 [2024-07-24 23:22:47.836415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.742 [2024-07-24 23:22:48.005837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.742 [2024-07-24 23:22:48.081530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:26.309 23:22:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.309 23:22:48 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:22:26.309 23:22:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:26.568 23:22:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:26.826 NVMe0n1 00:22:26.826 23:22:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82641 00:22:26.826 23:22:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:26.826 23:22:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.084 Running I/O for 10 seconds... 
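For context, the bdevperf setup traced above (host/timeout.sh@73 through @83) amounts to the following sequence; a condensed sketch of the same commands, with the reconnect-related options spelled out (socket path, address and NQN are the ones used in this run):

  # start bdevperf idle (-z) on core 2 (-m 0x4), controlled over an RPC socket; run it in the background
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # bdev_nvme option string the harness passes before attaching (retry setting copied as-is from the trace above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach the target: retry a lost connection every 1 s, fail pending I/O back after 2 s,
  # and give up on the controller entirely after 5 s without a successful reconnect
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # kick off the verify workload defined on the bdevperf command line (queue depth 128, 4096-byte I/O, 10 s)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

These timeout knobs are what the rest of this test exercises: the listener is removed below, so reconnect attempts fail and the controller is eventually declared lost.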
00:22:28.019 23:22:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.282 [2024-07-24 23:22:50.548675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.282 [2024-07-24 23:22:50.548917] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.283 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x2354e40 repeats many more times in this window ...] 00:22:28.283 [2024-07-24 23:22:50.549687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same
with the state(5) to be set 00:22:28.283 [2024-07-24 23:22:50.549696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.283 [2024-07-24 23:22:50.549704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354e40 is same with the state(5) to be set 00:22:28.283 [2024-07-24 23:22:50.549793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.283 [2024-07-24 23:22:50.549826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.283 [2024-07-24 23:22:50.549851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.283 [2024-07-24 23:22:50.549863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.283 [2024-07-24 23:22:50.549876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.549897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.549920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.549942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.549963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.549985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.549994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 
23:22:50.550026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:99 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.284 [2024-07-24 23:22:50.550679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.284 [2024-07-24 23:22:50.550691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:28.285 [2024-07-24 23:22:50.550937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.550982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.550993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551383] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.285 [2024-07-24 23:22:50.551542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.285 [2024-07-24 23:22:50.551552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:28.286 [2024-07-24 23:22:50.551830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.551974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.551989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.286 [2024-07-24 23:22:50.552318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.286 [2024-07-24 23:22:50.552349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.286 [2024-07-24 23:22:50.552371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.286 [2024-07-24 23:22:50.552391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.286 [2024-07-24 23:22:50.552403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.286 [2024-07-24 23:22:50.552412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:111 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:28.287 [2024-07-24 23:22:50.552633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.287 [2024-07-24 23:22:50.552653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241e820 is same with the state(5) to be set 00:22:28.287 [2024-07-24 23:22:50.552692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:28.287 [2024-07-24 23:22:50.552700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:28.287 [2024-07-24 23:22:50.552710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:22:28.287 [2024-07-24 23:22:50.552720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:28.287 [2024-07-24 23:22:50.552785] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x241e820 was disconnected and freed. reset controller. 
00:22:28.287 [2024-07-24 23:22:50.553044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:28.287 [2024-07-24 23:22:50.553125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor
00:22:28.287 [2024-07-24 23:22:50.553264] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:28.287 [2024-07-24 23:22:50.553286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420
00:22:28.287 [2024-07-24 23:22:50.553298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set
00:22:28.287 [2024-07-24 23:22:50.553316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor
00:22:28.287 [2024-07-24 23:22:50.553333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:28.287 [2024-07-24 23:22:50.553343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:28.287 [2024-07-24 23:22:50.553356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:28.287 [2024-07-24 23:22:50.553376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:28.287 [2024-07-24 23:22:50.553388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:28.287 23:22:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:29.258 [2024-07-24 23:22:51.553570] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:29.258 [2024-07-24 23:22:51.553649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420
00:22:29.258 [2024-07-24 23:22:51.553667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set
00:22:29.258 [2024-07-24 23:22:51.553699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor
00:22:29.258 [2024-07-24 23:22:51.553721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:29.258 [2024-07-24 23:22:51.553732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:29.258 [2024-07-24 23:22:51.553744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:29.258 [2024-07-24 23:22:51.553776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
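The records above show the nvmf_timeout host side while the target listener is down: every reconnect attempt fails with connect() errno = 111 (ECONNREFUSED), the controller stays in the failed state, and bdev_nvme keeps retrying the reset. The trace that follows shows the test re-adding the listener, after which the next reset succeeds and bdevperf reports its results. As a rough sketch of that listener toggle, reconstructed only from the rpc.py calls and paths that appear verbatim in this log (the SPDK and NQN shell variables are illustrative, not part of the test script):

  # Sketch only: drop and restore the TCP listener the way host/timeout.sh does
  # in the surrounding trace, using the same subsystem NQN, address, and port.
  SPDK=/home/vagrant/spdk_repo/spdk
  NQN=nqn.2016-06.io.spdk:cnode1

  # Removing the listener aborts in-flight I/O (ABORTED - SQ DELETION) and makes
  # the host's reconnect attempts fail with ECONNREFUSED (errno = 111), as logged above.
  "$SPDK/scripts/rpc.py" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1

  # Re-adding the listener lets the next reconnect succeed; bdev_nvme then logs
  # "Resetting controller successful." and the queued I/O completes.
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420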
00:22:29.258 [2024-07-24 23:22:51.553790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:29.258 23:22:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:29.517 [2024-07-24 23:22:51.799423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:29.517 23:22:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82641
00:22:30.084 [2024-07-24 23:22:52.567203] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:38.194
00:22:38.195                                                    Latency(us)
00:22:38.195 Device Information          : runtime(s)    IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:38.195 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:38.195 Verification LBA range: start 0x0 length 0x4000
00:22:38.195 NVMe0n1                     :      10.01    6261.81     24.46     0.00     0.00   20408.81    1422.43 3035150.89
00:22:38.195 ===================================================================================================================
00:22:38.195 Total                       :               6261.81     24.46     0.00     0.00   20408.81    1422.43 3035150.89
00:22:38.195 0
00:22:38.195 23:22:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82746
00:22:38.195 23:22:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:38.195 23:22:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:38.195 Running I/O for 10 seconds...
00:22:38.455 23:23:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:38.455 [2024-07-24 23:23:00.724657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:38.455 [2024-07-24 23:23:00.724717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.455 [2024-07-24 23:23:00.724742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:38.455 [2024-07-24 23:23:00.724753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.455 [2024-07-24 23:23:00.724765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:38.455 [2024-07-24 23:23:00.724774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.455 [2024-07-24 23:23:00.724786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:38.455 [2024-07-24 23:23:00.724795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:38.455 [2024-07-24 23:23:00.724806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:38.455 [2024-07-24 23:23:00.724815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.455 [2024-07-24 23:23:00.724825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.455 [2024-07-24 23:23:00.724834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.455 [2024-07-24 23:23:00.724845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.455 [2024-07-24 23:23:00.724854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.455 [2024-07-24 23:23:00.724865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.455 [2024-07-24 23:23:00.724874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.455 [2024-07-24 23:23:00.724884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.455 [2024-07-24 23:23:00.724893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.724903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.724911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.724922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.724931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.724941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.724953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.724963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.724971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.724982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.724991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:38.456 [2024-07-24 23:23:00.725021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725257] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.456 [2024-07-24 23:23:00.725389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725464] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.456 [2024-07-24 23:23:00.725718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.456 [2024-07-24 23:23:00.725730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.725879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.725899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.725919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.725941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.725961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.725982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.725993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 
23:23:00.726089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.457 [2024-07-24 23:23:00.726383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.457 [2024-07-24 23:23:00.726519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.457 [2024-07-24 23:23:00.726530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.458 [2024-07-24 23:23:00.726538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 
[2024-07-24 23:23:00.726919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.726987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.726996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.458 [2024-07-24 23:23:00.727016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24301a0 is same with the state(5) to be set 00:22:38.458 [2024-07-24 23:23:00.727038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67896 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68224 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68232 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68240 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68248 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68256 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68264 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68272 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.458 [2024-07-24 23:23:00.727365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.458 [2024-07-24 23:23:00.727373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68280 len:8 PRP1 0x0 PRP2 0x0 00:22:38.458 [2024-07-24 23:23:00.727381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.458 [2024-07-24 23:23:00.727391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.727398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.727406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68288 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.727414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.727423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.727431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.727438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68296 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.727446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.727455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.727462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.727470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68304 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.727478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.727487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.727494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.727502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68312 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.727511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.727520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.727533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.727561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68320 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.727576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.737165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.737177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68328 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.737188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:38.459 [2024-07-24 23:23:00.737199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.737206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.737215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68336 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.737224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:38.459 [2024-07-24 23:23:00.737240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:38.459 [2024-07-24 23:23:00.737249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68344 len:8 PRP1 0x0 PRP2 0x0 00:22:38.459 [2024-07-24 23:23:00.737258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737333] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24301a0 was disconnected and freed. reset controller. 00:22:38.459 [2024-07-24 23:23:00.737442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.459 [2024-07-24 23:23:00.737460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.459 [2024-07-24 23:23:00.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.459 [2024-07-24 23:23:00.737501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.459 [2024-07-24 23:23:00.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.459 [2024-07-24 23:23:00.737529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set 00:22:38.459 [2024-07-24 23:23:00.737738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.459 [2024-07-24 23:23:00.737760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor 00:22:38.459 [2024-07-24 23:23:00.737872] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.459 [2024-07-24 23:23:00.737894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420 00:22:38.459 [2024-07-24 23:23:00.737905] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set 00:22:38.459 [2024-07-24 23:23:00.737935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor 00:22:38.459 [2024-07-24 23:23:00.737952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:38.459 [2024-07-24 23:23:00.737964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:38.459 [2024-07-24 23:23:00.737976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:38.459 [2024-07-24 23:23:00.737997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.459 [2024-07-24 23:23:00.738008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:38.459 23:23:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:39.394 [2024-07-24 23:23:01.738195] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.394 [2024-07-24 23:23:01.738258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420 00:22:39.394 [2024-07-24 23:23:01.738276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set 00:22:39.394 [2024-07-24 23:23:01.738305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor 00:22:39.394 [2024-07-24 23:23:01.738326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:39.394 [2024-07-24 23:23:01.738336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:39.394 [2024-07-24 23:23:01.738349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.395 [2024-07-24 23:23:01.738381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.395 [2024-07-24 23:23:01.738393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:40.329 [2024-07-24 23:23:02.738599] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.329 [2024-07-24 23:23:02.738673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420 00:22:40.329 [2024-07-24 23:23:02.738692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set 00:22:40.329 [2024-07-24 23:23:02.738740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor 00:22:40.329 [2024-07-24 23:23:02.738762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.329 [2024-07-24 23:23:02.738773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.329 [2024-07-24 23:23:02.738786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:40.329 [2024-07-24 23:23:02.738819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:40.329 [2024-07-24 23:23:02.738833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.261 [2024-07-24 23:23:03.742568] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.261 [2024-07-24 23:23:03.742647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23acd40 with addr=10.0.0.2, port=4420 00:22:41.261 [2024-07-24 23:23:03.742667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23acd40 is same with the state(5) to be set 00:22:41.261 [2024-07-24 23:23:03.742924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23acd40 (9): Bad file descriptor 00:22:41.261 [2024-07-24 23:23:03.743188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:41.261 [2024-07-24 23:23:03.743204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:41.261 [2024-07-24 23:23:03.743217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:41.261 23:23:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.519 [2024-07-24 23:23:03.747135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:41.519 [2024-07-24 23:23:03.747188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:41.777 [2024-07-24 23:23:04.056070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.777 23:23:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82746 00:22:42.345 [2024-07-24 23:23:04.782622] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
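The reconnect attempts above keep failing with connect() errno 111 until host/timeout.sh@102 re-adds the TCP listener, after which the queued reset completes ("Resetting controller successful" above). A minimal sketch of that listener toggle, built only from the rpc.py calls visible in this log (NQN, address and port are copied from the trace; the RPC and NQN shell variables are illustrative, not part of the test script):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Restore the listener so the initiator's pending reconnects can succeed
# (this is the call traced at host/timeout.sh@102 above).
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Dropping the listener again (host/timeout.sh@126, later in this log) is what
# pushes in-flight I/O into the "ABORTED - SQ DELETION" completions printed around it.
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420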
00:22:47.615
00:22:47.615 Latency(us)
00:22:47.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.615 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:47.615 Verification LBA range: start 0x0 length 0x4000
00:22:47.615 NVMe0n1 : 10.01 5412.22 21.14 3804.89 0.00 13860.56 606.95 3035150.89
00:22:47.615 ===================================================================================================================
00:22:47.615 Total : 5412.22 21.14 3804.89 0.00 13860.56 0.00 3035150.89
00:22:47.615 0
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82618 ']'
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:47.615 killing process with pid 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82618'
00:22:47.615 Received shutdown signal, test time was about 10.000000 seconds
00:22:47.615
00:22:47.615 Latency(us)
00:22:47.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.615 ===================================================================================================================
00:22:47.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82618
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82861
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82861 /var/tmp/bdevperf.sock
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82861 ']'
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:47.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:22:47.615 23:23:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:47.615 [2024-07-24 23:23:10.010002] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization...
00:22:47.616 [2024-07-24 23:23:10.010163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82861 ]
00:22:47.875 [2024-07-24 23:23:10.146030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:48.134 [2024-07-24 23:23:10.293231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:48.700 [2024-07-24 23:23:10.370873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:22:48.700 23:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:48.701 23:23:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:22:48.701 23:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82877
00:22:48.701 23:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82861 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:22:48.701 23:23:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:22:48.958 23:23:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:22:49.216 NVMe0n1
00:22:49.216 23:23:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82917
00:22:49.216 23:23:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:49.216 23:23:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:22:49.474 Running I/O for 10 seconds...
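Condensed for reference, the bdevperf session whose startup is traced above boils down to the following sequence. This is a sketch assembled from the commands recorded in this log (binary and script paths, core mask, queue depth, I/O size and the 5 s ctrlr-loss / 2 s reconnect-delay values are taken from the trace); the SPDK and SOCK variables and the backgrounding with & are illustrative, not a verbatim excerpt of host/timeout.sh:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf on core 2 (-m 0x4) in wait-for-RPC mode (-z) with a 128-deep,
# 4096-byte randread job, exposing its own RPC socket at $SOCK.
$SPDK/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &

# Apply the bdev_nvme options used by the test (-r -1 -e 9), then attach the
# NVMe-oF TCP controller with a 5 s ctrlr-loss timeout and 2 s reconnect delay.
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
$SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the queued job ("Running I/O for 10 seconds..." above).
$SPDK/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests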
00:22:50.406 23:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.665 [2024-07-24 23:23:12.901998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 
[2024-07-24 23:23:12.902302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.665 [2024-07-24 23:23:12.902444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.665 [2024-07-24 23:23:12.902453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.902981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.902990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:50.666 [2024-07-24 23:23:12.903183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.666 [2024-07-24 23:23:12.903275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.666 [2024-07-24 23:23:12.903287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 
23:23:12.903395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:50568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.903984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.903994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.904026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.904047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106240 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.904068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.904089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.667 [2024-07-24 23:23:12.904110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.667 [2024-07-24 23:23:12.904122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:50.668 [2024-07-24 23:23:12.904296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 
23:23:12.904519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.668 [2024-07-24 23:23:12.904862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1176790 is same with the state(5) to be set 00:22:50.668 [2024-07-24 23:23:12.904886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:50.668 [2024-07-24 23:23:12.904894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:50.668 [2024-07-24 23:23:12.904903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34640 len:8 PRP1 0x0 PRP2 0x0 00:22:50.668 [2024-07-24 23:23:12.904912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.904979] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1176790 was disconnected and freed. reset controller. 
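The long run of identical completions above is the expected picture when an I/O qpair is torn down mid-test: every command still queued on submission queue 1 is completed manually with status (00/08), i.e. Status Code Type 0h (Generic Command Status) and Status Code 08h (Command Aborted due to SQ Deletion), after which the qpair at 0x1176790 is freed and a controller reset is scheduled. A quick way to gauge how many in-flight commands were failed back this way is to count the abort completions in the captured console output (the log file name below is illustrative, not something produced by this job):

    grep -c 'ABORTED - SQ DELETION' nvmf-tcp-uring-vg-autotest.log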
00:22:50.668 [2024-07-24 23:23:12.905062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.668 [2024-07-24 23:23:12.905079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.668 [2024-07-24 23:23:12.905091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.669 [2024-07-24 23:23:12.905101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.669 [2024-07-24 23:23:12.905111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.669 [2024-07-24 23:23:12.905121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.669 [2024-07-24 23:23:12.905145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.669 [2024-07-24 23:23:12.905157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.669 [2024-07-24 23:23:12.905166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1104c00 is same with the state(5) to be set 00:22:50.669 [2024-07-24 23:23:12.905414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.669 [2024-07-24 23:23:12.905446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1104c00 (9): Bad file descriptor 00:22:50.669 [2024-07-24 23:23:12.905558] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.669 [2024-07-24 23:23:12.905590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1104c00 with addr=10.0.0.2, port=4420 00:22:50.669 [2024-07-24 23:23:12.905603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1104c00 is same with the state(5) to be set 00:22:50.669 [2024-07-24 23:23:12.905623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1104c00 (9): Bad file descriptor 00:22:50.669 [2024-07-24 23:23:12.905640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.669 [2024-07-24 23:23:12.905650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.669 [2024-07-24 23:23:12.905669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.669 [2024-07-24 23:23:12.905691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
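For reference, the errno = 111 reported by uring_sock_create() is ECONNREFUSED: the reconnect attempt is made while nothing is accepting on 10.0.0.2:4420, so the TCP connect() is refused outright. On a Linux build host the value can be confirmed from the kernel UAPI header (header path may differ by distro; the expected define is shown as a comment):

    grep -n 'ECONNREFUSED' /usr/include/asm-generic/errno.h
    # #define ECONNREFUSED  111  /* Connection refused */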
00:22:50.669 [2024-07-24 23:23:12.905704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.669 23:23:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82917 00:22:52.565 [2024-07-24 23:23:14.906037] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.565 [2024-07-24 23:23:14.906118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1104c00 with addr=10.0.0.2, port=4420 00:22:52.565 [2024-07-24 23:23:14.906149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1104c00 is same with the state(5) to be set 00:22:52.565 [2024-07-24 23:23:14.906183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1104c00 (9): Bad file descriptor 00:22:52.565 [2024-07-24 23:23:14.906204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.565 [2024-07-24 23:23:14.906216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.565 [2024-07-24 23:23:14.906229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.565 [2024-07-24 23:23:14.906261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.565 [2024-07-24 23:23:14.906273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.533 [2024-07-24 23:23:16.906599] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.533 [2024-07-24 23:23:16.906679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1104c00 with addr=10.0.0.2, port=4420 00:22:54.533 [2024-07-24 23:23:16.906698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1104c00 is same with the state(5) to be set 00:22:54.533 [2024-07-24 23:23:16.906731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1104c00 (9): Bad file descriptor 00:22:54.533 [2024-07-24 23:23:16.906752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.533 [2024-07-24 23:23:16.906764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.533 [2024-07-24 23:23:16.906776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.533 [2024-07-24 23:23:16.906810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:54.533 [2024-07-24 23:23:16.906823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.433 [2024-07-24 23:23:18.906997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
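The reconnect attempts above land almost exactly two seconds apart (23:23:12.905, 23:23:14.906, 23:23:16.906, 23:23:18.907), which is the delayed-reconnect cadence this timeout test is designed to provoke; the grep -c 'reconnect delay bdev controller NVMe0' check further down counts exactly those delayed attempts. A minimal sketch of requesting that behaviour when attaching the controller follows; the bdev name and numeric values are illustrative, not read from this run, and it assumes the reconnect options exposed by recent SPDK rpc.py revisions:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 5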
00:22:56.433 [2024-07-24 23:23:18.907071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:56.433 [2024-07-24 23:23:18.907084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:56.433 [2024-07-24 23:23:18.907097] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:22:56.433 [2024-07-24 23:23:18.907140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:57.808
00:22:57.808 Latency(us)
00:22:57.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:57.808 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:22:57.808 NVMe0n1 : 8.15 2182.05 8.52 15.71 0.00 58190.74 7238.75 7015926.69
00:22:57.808 ===================================================================================================================
00:22:57.808 Total : 2182.05 8.52 15.71 0.00 58190.74 7238.75 7015926.69
00:22:57.808 0
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:57.808 Attaching 5 probes...
00:22:57.808 1383.794778: reset bdev controller NVMe0
00:22:57.808 1383.875391: reconnect bdev controller NVMe0
00:22:57.808 3384.267007: reconnect delay bdev controller NVMe0
00:22:57.808 3384.294786: reconnect bdev controller NVMe0
00:22:57.808 5384.806577: reconnect delay bdev controller NVMe0
00:22:57.808 5384.832890: reconnect bdev controller NVMe0
00:22:57.808 7385.341093: reconnect delay bdev controller NVMe0
00:22:57.808 7385.366576: reconnect bdev controller NVMe0
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82877
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82861
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82861 ']'
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82861
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82861
00:22:57.808 killing process with pid 82861 Received shutdown signal, test time was about 8.205851 seconds
00:22:57.808
00:22:57.808 Latency(us)
00:22:57.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:57.808 ===================================================================================================================
00:22:57.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82861'
00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout --
common/autotest_common.sh@967 -- # kill 82861 00:22:57.808 23:23:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82861 00:22:57.808 23:23:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:58.374 rmmod nvme_tcp 00:22:58.374 rmmod nvme_fabrics 00:22:58.374 rmmod nvme_keyring 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82423 ']' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82423 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82423 ']' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82423 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82423 00:22:58.374 killing process with pid 82423 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82423' 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82423 00:22:58.374 23:23:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82423 00:22:58.939 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:58.939 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:58.939 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:58.940 00:22:58.940 real 0m48.260s 00:22:58.940 user 2m21.554s 
00:22:58.940 sys 0m5.995s 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.940 23:23:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:58.940 ************************************ 00:22:58.940 END TEST nvmf_timeout 00:22:58.940 ************************************ 00:22:58.940 23:23:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:58.940 23:23:21 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:22:58.940 23:23:21 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:22:58.940 23:23:21 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.940 23:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.940 23:23:21 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:22:58.940 00:22:58.940 real 12m24.041s 00:22:58.940 user 30m3.845s 00:22:58.940 sys 3m10.999s 00:22:58.940 23:23:21 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.940 ************************************ 00:22:58.940 23:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:58.940 END TEST nvmf_tcp 00:22:58.940 ************************************ 00:22:58.940 23:23:21 -- common/autotest_common.sh@1142 -- # return 0 00:22:58.940 23:23:21 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:22:58.940 23:23:21 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:58.940 23:23:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:58.940 23:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.940 23:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:58.940 ************************************ 00:22:58.940 START TEST nvmf_dif 00:22:58.940 ************************************ 00:22:58.940 23:23:21 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:58.940 * Looking for test storage... 
00:22:58.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.940 23:23:21 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.940 23:23:21 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.940 23:23:21 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.940 23:23:21 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.940 23:23:21 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.940 23:23:21 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.940 23:23:21 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:58.940 23:23:21 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:58.940 23:23:21 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.940 23:23:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:58.940 23:23:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.940 23:23:21 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:58.940 Cannot find device "nvmf_tgt_br" 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.940 Cannot find device "nvmf_tgt_br2" 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:58.940 23:23:21 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:59.206 Cannot find device "nvmf_tgt_br" 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:59.206 Cannot find device "nvmf_tgt_br2" 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:59.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:59.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:59.206 
23:23:21 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:59.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:22:59.206 00:22:59.206 --- 10.0.0.2 ping statistics --- 00:22:59.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.206 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:59.206 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:59.206 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:22:59.206 00:22:59.206 --- 10.0.0.3 ping statistics --- 00:22:59.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.206 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:59.206 23:23:21 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:59.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:59.490 00:22:59.490 --- 10.0.0.1 ping statistics --- 00:22:59.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.490 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:59.490 23:23:21 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.490 23:23:21 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:59.490 23:23:21 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:59.490 23:23:21 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:59.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:59.776 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:59.776 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.776 23:23:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:59.776 23:23:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:59.776 23:23:22 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.776 23:23:22 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.776 23:23:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:59.776 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:22:59.777 23:23:22 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83357 00:22:59.777 23:23:22 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:59.777 23:23:22 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83357 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83357 ']' 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.777 23:23:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:59.777 [2024-07-24 23:23:22.126773] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:22:59.777 [2024-07-24 23:23:22.126923] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.034 [2024-07-24 23:23:22.274014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.034 [2024-07-24 23:23:22.459768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.034 [2024-07-24 23:23:22.460329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.034 [2024-07-24 23:23:22.460617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.034 [2024-07-24 23:23:22.460948] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.034 [2024-07-24 23:23:22.461086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
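At this point the nvmf_dif target (pid 83357) is up inside the nvmf_tgt_ns_spdk network namespace, and the initiator reaches it over a veth pair bridged back into the default namespace. Condensed from the nvmf_veth_init trace above into a minimal sketch (the link-up steps and the second target interface, 10.0.0.3, are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check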
00:23:00.034 [2024-07-24 23:23:22.461283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.291 [2024-07-24 23:23:22.541529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:23:00.857 23:23:23 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 23:23:23 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.857 23:23:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:00.857 23:23:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 [2024-07-24 23:23:23.234389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.857 23:23:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:00.857 23:23:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 ************************************ 00:23:00.857 START TEST fio_dif_1_default 00:23:00.857 ************************************ 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 bdev_null0 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.857 23:23:23 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:00.857 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:00.858 [2024-07-24 23:23:23.278544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.858 { 00:23:00.858 "params": { 00:23:00.858 "name": "Nvme$subsystem", 00:23:00.858 "trtype": "$TEST_TRANSPORT", 00:23:00.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.858 "adrfam": "ipv4", 00:23:00.858 "trsvcid": "$NVMF_PORT", 00:23:00.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.858 "hdgst": ${hdgst:-false}, 00:23:00.858 "ddgst": ${ddgst:-false} 00:23:00.858 }, 00:23:00.858 "method": "bdev_nvme_attach_controller" 00:23:00.858 } 00:23:00.858 EOF 00:23:00.858 )") 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # grep libasan 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.858 "params": { 00:23:00.858 "name": "Nvme0", 00:23:00.858 "trtype": "tcp", 00:23:00.858 "traddr": "10.0.0.2", 00:23:00.858 "adrfam": "ipv4", 00:23:00.858 "trsvcid": "4420", 00:23:00.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:00.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:00.858 "hdgst": false, 00:23:00.858 "ddgst": false 00:23:00.858 }, 00:23:00.858 "method": "bdev_nvme_attach_controller" 00:23:00.858 }' 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:00.858 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:01.115 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:01.115 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:01.115 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:01.115 23:23:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.115 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:01.115 fio-3.35 00:23:01.115 Starting 1 thread 00:23:13.313 00:23:13.313 filename0: (groupid=0, jobs=1): err= 0: pid=83418: Wed Jul 24 23:23:34 2024 00:23:13.313 read: IOPS=8201, BW=32.0MiB/s (33.6MB/s)(320MiB/10001msec) 00:23:13.313 slat (nsec): min=6691, max=52007, avg=9061.94, stdev=2865.28 00:23:13.313 clat (usec): min=367, max=2570, avg=460.82, stdev=38.96 00:23:13.313 lat (usec): min=374, max=2604, avg=469.88, stdev=39.56 00:23:13.313 clat percentiles (usec): 00:23:13.313 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 433], 00:23:13.313 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 465], 00:23:13.313 | 70.00th=[ 474], 80.00th=[ 486], 90.00th=[ 502], 95.00th=[ 519], 00:23:13.314 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 668], 99.95th=[ 693], 00:23:13.314 | 99.99th=[ 1221] 00:23:13.314 bw ( KiB/s): min=31168, max=33344, per=100.00%, avg=32823.58, stdev=502.90, samples=19 00:23:13.314 iops : min= 7792, max= 8336, avg=8205.89, stdev=125.73, samples=19 00:23:13.314 lat (usec) : 500=89.43%, 750=10.55%, 1000=0.01% 
00:23:13.314 lat (msec) : 2=0.01%, 4=0.01% 00:23:13.314 cpu : usr=81.61%, sys=16.20%, ctx=31, majf=0, minf=0 00:23:13.314 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:13.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.314 issued rwts: total=82024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.314 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:13.314 00:23:13.314 Run status group 0 (all jobs): 00:23:13.314 READ: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=320MiB (336MB), run=10001-10001msec 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 ************************************ 00:23:13.314 END TEST fio_dif_1_default 00:23:13.314 ************************************ 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 00:23:13.314 real 0m11.110s 00:23:13.314 user 0m8.859s 00:23:13.314 sys 0m1.928s 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:13.314 23:23:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:13.314 23:23:34 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:13.314 23:23:34 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 ************************************ 00:23:13.314 START TEST fio_dif_1_multi_subsystems 00:23:13.314 ************************************ 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.314 23:23:34 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 bdev_null0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 [2024-07-24 23:23:34.442412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 bdev_null1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.314 { 00:23:13.314 "params": { 00:23:13.314 "name": "Nvme$subsystem", 00:23:13.314 "trtype": "$TEST_TRANSPORT", 00:23:13.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.314 "adrfam": "ipv4", 00:23:13.314 "trsvcid": "$NVMF_PORT", 00:23:13.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.314 "hdgst": ${hdgst:-false}, 00:23:13.314 "ddgst": ${ddgst:-false} 00:23:13.314 }, 00:23:13.314 "method": "bdev_nvme_attach_controller" 00:23:13.314 } 00:23:13.314 EOF 00:23:13.314 )") 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:13.314 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.315 { 00:23:13.315 "params": { 00:23:13.315 "name": "Nvme$subsystem", 00:23:13.315 "trtype": "$TEST_TRANSPORT", 00:23:13.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.315 "adrfam": "ipv4", 00:23:13.315 "trsvcid": "$NVMF_PORT", 00:23:13.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.315 "hdgst": ${hdgst:-false}, 00:23:13.315 "ddgst": ${ddgst:-false} 00:23:13.315 }, 00:23:13.315 "method": "bdev_nvme_attach_controller" 00:23:13.315 } 00:23:13.315 EOF 00:23:13.315 )") 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:13.315 "params": { 00:23:13.315 "name": "Nvme0", 00:23:13.315 "trtype": "tcp", 00:23:13.315 "traddr": "10.0.0.2", 00:23:13.315 "adrfam": "ipv4", 00:23:13.315 "trsvcid": "4420", 00:23:13.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.315 "hdgst": false, 00:23:13.315 "ddgst": false 00:23:13.315 }, 00:23:13.315 "method": "bdev_nvme_attach_controller" 00:23:13.315 },{ 00:23:13.315 "params": { 00:23:13.315 "name": "Nvme1", 00:23:13.315 "trtype": "tcp", 00:23:13.315 "traddr": "10.0.0.2", 00:23:13.315 "adrfam": "ipv4", 00:23:13.315 "trsvcid": "4420", 00:23:13.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.315 "hdgst": false, 00:23:13.315 "ddgst": false 00:23:13.315 }, 00:23:13.315 "method": "bdev_nvme_attach_controller" 00:23:13.315 }' 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:13.315 23:23:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.315 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:13.315 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:13.315 fio-3.35 00:23:13.315 Starting 2 threads 00:23:23.331 00:23:23.331 filename0: (groupid=0, jobs=1): err= 0: pid=83577: Wed Jul 24 23:23:45 2024 00:23:23.331 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:23:23.331 slat (nsec): min=6908, max=81087, avg=14949.77, stdev=5872.73 00:23:23.331 clat (usec): min=465, max=3306, avg=808.69, stdev=55.45 00:23:23.331 lat (usec): min=475, max=3339, avg=823.64, stdev=58.07 00:23:23.331 clat percentiles (usec): 00:23:23.331 | 1.00th=[ 693], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 775], 00:23:23.332 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 816], 00:23:23.332 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 914], 00:23:23.332 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1057], 00:23:23.332 | 99.99th=[ 1172] 00:23:23.332 bw ( KiB/s): min=16766, max=19456, per=49.95%, avg=18844.53, stdev=902.77, samples=19 00:23:23.332 iops : min= 4191, max= 
4864, avg=4711.11, stdev=225.76, samples=19 00:23:23.332 lat (usec) : 500=0.01%, 750=7.34%, 1000=92.38% 00:23:23.332 lat (msec) : 2=0.27%, 4=0.01% 00:23:23.332 cpu : usr=90.08%, sys=8.64%, ctx=12, majf=0, minf=9 00:23:23.332 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.332 issued rwts: total=47156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.332 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:23.332 filename1: (groupid=0, jobs=1): err= 0: pid=83578: Wed Jul 24 23:23:45 2024 00:23:23.332 read: IOPS=4716, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:23:23.332 slat (usec): min=6, max=138, avg=15.29, stdev= 5.69 00:23:23.332 clat (usec): min=363, max=1498, avg=805.67, stdev=47.47 00:23:23.332 lat (usec): min=370, max=1546, avg=820.96, stdev=50.50 00:23:23.332 clat percentiles (usec): 00:23:23.332 | 1.00th=[ 717], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 775], 00:23:23.332 | 30.00th=[ 783], 40.00th=[ 791], 50.00th=[ 799], 60.00th=[ 807], 00:23:23.332 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 898], 00:23:23.332 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1074], 00:23:23.332 | 99.99th=[ 1467] 00:23:23.332 bw ( KiB/s): min=16800, max=19456, per=49.96%, avg=18846.32, stdev=898.44, samples=19 00:23:23.332 iops : min= 4200, max= 4864, avg=4711.58, stdev=224.61, samples=19 00:23:23.332 lat (usec) : 500=0.03%, 750=5.26%, 1000=94.52% 00:23:23.332 lat (msec) : 2=0.18% 00:23:23.332 cpu : usr=90.89%, sys=7.73%, ctx=80, majf=0, minf=0 00:23:23.332 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:23.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:23.332 issued rwts: total=47168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:23.332 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:23.332 00:23:23.332 Run status group 0 (all jobs): 00:23:23.332 READ: bw=36.8MiB/s (38.6MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=368MiB (386MB), run=10001-10001msec 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 00:23:23.332 real 0m11.211s 00:23:23.332 user 0m18.882s 00:23:23.332 sys 0m1.966s 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:23.332 ************************************ 00:23:23.332 END TEST fio_dif_1_multi_subsystems 00:23:23.332 ************************************ 00:23:23.332 23:23:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:23.332 23:23:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:23.332 23:23:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:23.332 23:23:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 ************************************ 00:23:23.332 START TEST fio_dif_rand_params 00:23:23.332 ************************************ 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:23.332 23:23:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 bdev_null0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:23.332 [2024-07-24 23:23:45.707682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.332 { 00:23:23.332 "params": { 00:23:23.332 "name": "Nvme$subsystem", 00:23:23.332 "trtype": "$TEST_TRANSPORT", 00:23:23.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.332 "adrfam": "ipv4", 00:23:23.332 "trsvcid": "$NVMF_PORT", 00:23:23.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.332 "hdgst": ${hdgst:-false}, 00:23:23.332 "ddgst": ${ddgst:-false} 00:23:23.332 }, 00:23:23.332 "method": "bdev_nvme_attach_controller" 00:23:23.332 } 00:23:23.332 EOF 00:23:23.332 )") 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:23.332 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.333 "params": { 00:23:23.333 "name": "Nvme0", 00:23:23.333 "trtype": "tcp", 00:23:23.333 "traddr": "10.0.0.2", 00:23:23.333 "adrfam": "ipv4", 00:23:23.333 "trsvcid": "4420", 00:23:23.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:23.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:23.333 "hdgst": false, 00:23:23.333 "ddgst": false 00:23:23.333 }, 00:23:23.333 "method": "bdev_nvme_attach_controller" 00:23:23.333 }' 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:23.333 23:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:23.591 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:23.591 ... 
00:23:23.591 fio-3.35 00:23:23.591 Starting 3 threads 00:23:30.161 00:23:30.161 filename0: (groupid=0, jobs=1): err= 0: pid=83734: Wed Jul 24 23:23:51 2024 00:23:30.161 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5006msec) 00:23:30.161 slat (nsec): min=6587, max=48798, avg=12386.90, stdev=4208.49 00:23:30.161 clat (usec): min=10322, max=13060, avg=11693.36, stdev=434.28 00:23:30.161 lat (usec): min=10329, max=13074, avg=11705.75, stdev=434.35 00:23:30.161 clat percentiles (usec): 00:23:30.161 | 1.00th=[10683], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:23:30.161 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:23:30.161 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:23:30.161 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:23:30.161 | 99.99th=[13042] 00:23:30.161 bw ( KiB/s): min=31488, max=33792, per=33.29%, avg=32710.30, stdev=653.08, samples=10 00:23:30.161 iops : min= 246, max= 264, avg=255.50, stdev= 5.15, samples=10 00:23:30.161 lat (msec) : 20=100.00% 00:23:30.161 cpu : usr=93.49%, sys=5.79%, ctx=55, majf=0, minf=9 00:23:30.161 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.161 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.161 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.161 filename0: (groupid=0, jobs=1): err= 0: pid=83735: Wed Jul 24 23:23:51 2024 00:23:30.161 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5007msec) 00:23:30.161 slat (nsec): min=6469, max=85251, avg=12055.09, stdev=6792.51 00:23:30.161 clat (usec): min=10402, max=13268, avg=11693.82, stdev=434.97 00:23:30.161 lat (usec): min=10409, max=13287, avg=11705.87, stdev=435.73 00:23:30.161 clat percentiles (usec): 00:23:30.161 | 1.00th=[10552], 5.00th=[10945], 10.00th=[11076], 20.00th=[11469], 00:23:30.161 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:23:30.161 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:23:30.161 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13304], 99.95th=[13304], 00:23:30.161 | 99.99th=[13304] 00:23:30.161 bw ( KiB/s): min=30720, max=34560, per=33.30%, avg=32716.80, stdev=971.45, samples=10 00:23:30.161 iops : min= 240, max= 270, avg=255.60, stdev= 7.59, samples=10 00:23:30.161 lat (msec) : 20=100.00% 00:23:30.161 cpu : usr=93.21%, sys=6.09%, ctx=11, majf=0, minf=9 00:23:30.161 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.161 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.161 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.161 filename0: (groupid=0, jobs=1): err= 0: pid=83736: Wed Jul 24 23:23:51 2024 00:23:30.161 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(160MiB/5007msec) 00:23:30.161 slat (nsec): min=7101, max=73633, avg=14391.08, stdev=7041.68 00:23:30.161 clat (usec): min=8746, max=13477, avg=11689.37, stdev=457.97 00:23:30.162 lat (usec): min=8755, max=13537, avg=11703.77, stdev=458.47 00:23:30.162 clat percentiles (usec): 00:23:30.162 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:23:30.162 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11731], 00:23:30.162 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:23:30.162 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13435], 99.95th=[13435], 00:23:30.162 | 99.99th=[13435] 00:23:30.162 bw ( KiB/s): min=31425, max=33792, per=33.29%, avg=32710.50, stdev=753.73, samples=10 00:23:30.162 iops : min= 245, max= 264, avg=255.50, stdev= 5.99, samples=10 00:23:30.162 lat (msec) : 10=0.23%, 20=99.77% 00:23:30.162 cpu : usr=93.83%, sys=5.53%, ctx=10, majf=0, minf=9 00:23:30.162 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.162 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.162 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.162 00:23:30.162 Run status group 0 (all jobs): 00:23:30.162 READ: bw=95.9MiB/s (101MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=480MiB (504MB), run=5006-5007msec 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:30.162 
23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 bdev_null0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 [2024-07-24 23:23:51.722001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 bdev_null1 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 bdev_null2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.162 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.162 { 00:23:30.162 "params": { 00:23:30.162 "name": "Nvme$subsystem", 00:23:30.162 "trtype": "$TEST_TRANSPORT", 00:23:30.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.162 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "$NVMF_PORT", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.163 "hdgst": ${hdgst:-false}, 00:23:30.163 "ddgst": ${ddgst:-false} 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 } 00:23:30.163 EOF 00:23:30.163 )") 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.163 { 00:23:30.163 "params": { 00:23:30.163 "name": "Nvme$subsystem", 00:23:30.163 "trtype": "$TEST_TRANSPORT", 00:23:30.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.163 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "$NVMF_PORT", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.163 "hdgst": ${hdgst:-false}, 00:23:30.163 "ddgst": ${ddgst:-false} 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 } 00:23:30.163 EOF 00:23:30.163 )") 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:30.163 
23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:30.163 { 00:23:30.163 "params": { 00:23:30.163 "name": "Nvme$subsystem", 00:23:30.163 "trtype": "$TEST_TRANSPORT", 00:23:30.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.163 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "$NVMF_PORT", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.163 "hdgst": ${hdgst:-false}, 00:23:30.163 "ddgst": ${ddgst:-false} 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 } 00:23:30.163 EOF 00:23:30.163 )") 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:30.163 "params": { 00:23:30.163 "name": "Nvme0", 00:23:30.163 "trtype": "tcp", 00:23:30.163 "traddr": "10.0.0.2", 00:23:30.163 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "4420", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.163 "hdgst": false, 00:23:30.163 "ddgst": false 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 },{ 00:23:30.163 "params": { 00:23:30.163 "name": "Nvme1", 00:23:30.163 "trtype": "tcp", 00:23:30.163 "traddr": "10.0.0.2", 00:23:30.163 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "4420", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.163 "hdgst": false, 00:23:30.163 "ddgst": false 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 },{ 00:23:30.163 "params": { 00:23:30.163 "name": "Nvme2", 00:23:30.163 "trtype": "tcp", 00:23:30.163 "traddr": "10.0.0.2", 00:23:30.163 "adrfam": "ipv4", 00:23:30.163 "trsvcid": "4420", 00:23:30.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.163 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:30.163 "hdgst": false, 00:23:30.163 "ddgst": false 00:23:30.163 }, 00:23:30.163 "method": "bdev_nvme_attach_controller" 00:23:30.163 }' 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.163 23:23:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.163 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:30.163 ... 00:23:30.163 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:30.163 ... 00:23:30.163 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:30.163 ... 00:23:30.163 fio-3.35 00:23:30.163 Starting 24 threads 00:23:42.378 00:23:42.378 filename0: (groupid=0, jobs=1): err= 0: pid=83831: Wed Jul 24 23:24:02 2024 00:23:42.378 read: IOPS=189, BW=757KiB/s (775kB/s)(7608KiB/10048msec) 00:23:42.378 slat (usec): min=4, max=8033, avg=26.81, stdev=226.34 00:23:42.378 clat (msec): min=3, max=160, avg=84.28, stdev=28.26 00:23:42.378 lat (msec): min=3, max=160, avg=84.30, stdev=28.26 00:23:42.378 clat percentiles (msec): 00:23:42.378 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 53], 20.00th=[ 66], 00:23:42.378 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 92], 00:23:42.378 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 122], 00:23:42.378 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 161], 00:23:42.378 | 99.99th=[ 161] 00:23:42.378 bw ( KiB/s): min= 592, max= 1680, per=4.33%, avg=754.00, stdev=234.28, samples=20 00:23:42.379 iops : min= 148, max= 420, avg=188.45, stdev=58.57, samples=20 00:23:42.379 lat (msec) : 4=1.68%, 10=2.52%, 20=0.84%, 50=3.42%, 100=57.89% 00:23:42.379 lat (msec) : 250=33.65% 00:23:42.379 cpu : usr=39.99%, sys=1.54%, ctx=1321, majf=0, minf=0 00:23:42.379 IO depths : 1=0.3%, 2=0.7%, 4=2.1%, 8=80.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83832: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=189, BW=756KiB/s (774kB/s)(7604KiB/10055msec) 00:23:42.379 slat (usec): min=4, max=5035, avg=31.43, stdev=218.38 00:23:42.379 clat (msec): min=6, max=148, avg=84.38, stdev=26.09 00:23:42.379 lat (msec): min=6, max=148, avg=84.42, stdev=26.09 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 61], 00:23:42.379 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 92], 00:23:42.379 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 122], 00:23:42.379 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 148], 00:23:42.379 | 99.99th=[ 148] 00:23:42.379 bw ( KiB/s): min= 584, max= 1317, per=4.33%, avg=753.60, stdev=170.43, samples=20 00:23:42.379 iops : min= 146, max= 329, avg=188.35, stdev=42.60, samples=20 00:23:42.379 lat (msec) : 10=1.68%, 20=0.84%, 50=5.79%, 100=58.39%, 250=33.30% 00:23:42.379 cpu : usr=42.79%, sys=1.84%, ctx=1344, majf=0, minf=9 00:23:42.379 IO depths : 1=0.2%, 2=0.5%, 4=1.4%, 8=82.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: 
total=1901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83833: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=178, BW=714KiB/s (732kB/s)(7160KiB/10023msec) 00:23:42.379 slat (usec): min=3, max=4059, avg=28.80, stdev=165.13 00:23:42.379 clat (msec): min=28, max=151, avg=89.39, stdev=24.14 00:23:42.379 lat (msec): min=28, max=151, avg=89.42, stdev=24.15 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 55], 20.00th=[ 70], 00:23:42.379 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 101], 00:23:42.379 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 124], 00:23:42.379 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 153], 99.95th=[ 153], 00:23:42.379 | 99.99th=[ 153] 00:23:42.379 bw ( KiB/s): min= 512, max= 1024, per=4.10%, avg=712.00, stdev=141.33, samples=20 00:23:42.379 iops : min= 128, max= 256, avg=178.00, stdev=35.33, samples=20 00:23:42.379 lat (msec) : 50=4.64%, 100=54.86%, 250=40.50% 00:23:42.379 cpu : usr=40.87%, sys=1.84%, ctx=1298, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=88.9%, 8=9.4%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83834: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=180, BW=722KiB/s (739kB/s)(7252KiB/10047msec) 00:23:42.379 slat (usec): min=5, max=8077, avg=45.44, stdev=433.28 00:23:42.379 clat (msec): min=38, max=144, avg=88.37, stdev=22.24 00:23:42.379 lat (msec): min=38, max=144, avg=88.41, stdev=22.23 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 59], 20.00th=[ 72], 00:23:42.379 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 96], 00:23:42.379 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:23:42.379 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:23:42.379 | 99.99th=[ 144] 00:23:42.379 bw ( KiB/s): min= 584, max= 952, per=4.13%, avg=718.85, stdev=105.34, samples=20 00:23:42.379 iops : min= 146, max= 238, avg=179.70, stdev=26.31, samples=20 00:23:42.379 lat (msec) : 50=4.96%, 100=60.51%, 250=34.53% 00:23:42.379 cpu : usr=32.68%, sys=1.53%, ctx=909, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83835: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=179, BW=719KiB/s (737kB/s)(7220KiB/10035msec) 00:23:42.379 slat (usec): min=5, max=5015, avg=31.64, stdev=224.51 00:23:42.379 clat (msec): min=45, max=158, avg=88.69, stdev=22.29 00:23:42.379 lat (msec): min=45, max=158, avg=88.73, stdev=22.28 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 48], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 70], 00:23:42.379 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 97], 00:23:42.379 | 70.00th=[ 108], 
80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 122], 00:23:42.379 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 159], 99.95th=[ 159], 00:23:42.379 | 99.99th=[ 159] 00:23:42.379 bw ( KiB/s): min= 544, max= 1000, per=4.13%, avg=717.80, stdev=123.55, samples=20 00:23:42.379 iops : min= 136, max= 250, avg=179.45, stdev=30.89, samples=20 00:23:42.379 lat (msec) : 50=3.27%, 100=58.56%, 250=38.17% 00:23:42.379 cpu : usr=40.62%, sys=1.52%, ctx=1161, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=1.3%, 4=5.4%, 8=77.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83836: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=183, BW=732KiB/s (750kB/s)(7340KiB/10025msec) 00:23:42.379 slat (usec): min=3, max=8049, avg=45.34, stdev=407.76 00:23:42.379 clat (msec): min=32, max=162, avg=87.17, stdev=24.21 00:23:42.379 lat (msec): min=32, max=162, avg=87.21, stdev=24.21 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 66], 00:23:42.379 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 96], 00:23:42.379 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 123], 00:23:42.379 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 163], 00:23:42.379 | 99.99th=[ 163] 00:23:42.379 bw ( KiB/s): min= 528, max= 1024, per=4.18%, avg=727.40, stdev=135.46, samples=20 00:23:42.379 iops : min= 132, max= 256, avg=181.85, stdev=33.86, samples=20 00:23:42.379 lat (msec) : 50=7.19%, 100=58.47%, 250=34.33% 00:23:42.379 cpu : usr=33.09%, sys=1.24%, ctx=933, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=88.4%, 8=10.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83837: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=171, BW=684KiB/s (701kB/s)(6852KiB/10012msec) 00:23:42.379 slat (usec): min=4, max=8045, avg=37.33, stdev=304.81 00:23:42.379 clat (msec): min=16, max=164, avg=93.28, stdev=26.24 00:23:42.379 lat (msec): min=16, max=164, avg=93.31, stdev=26.24 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 57], 20.00th=[ 70], 00:23:42.379 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 102], 60.00th=[ 108], 00:23:42.379 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 130], 00:23:42.379 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 165], 99.95th=[ 165], 00:23:42.379 | 99.99th=[ 165] 00:23:42.379 bw ( KiB/s): min= 512, max= 1024, per=3.85%, avg=669.32, stdev=157.69, samples=19 00:23:42.379 iops : min= 128, max= 256, avg=167.26, stdev=39.41, samples=19 00:23:42.379 lat (msec) : 20=0.35%, 50=6.13%, 100=43.02%, 250=50.50% 00:23:42.379 cpu : usr=39.30%, sys=1.56%, ctx=1209, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=3.3%, 4=13.0%, 8=69.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=90.6%, 8=6.6%, 
16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.379 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.379 filename0: (groupid=0, jobs=1): err= 0: pid=83838: Wed Jul 24 23:24:02 2024 00:23:42.379 read: IOPS=170, BW=683KiB/s (699kB/s)(6844KiB/10021msec) 00:23:42.379 slat (usec): min=4, max=8049, avg=51.65, stdev=483.99 00:23:42.379 clat (msec): min=28, max=167, avg=93.41, stdev=26.95 00:23:42.379 lat (msec): min=28, max=167, avg=93.46, stdev=26.96 00:23:42.379 clat percentiles (msec): 00:23:42.379 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 65], 00:23:42.379 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 100], 60.00th=[ 108], 00:23:42.379 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 121], 95.00th=[ 132], 00:23:42.379 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:23:42.379 | 99.99th=[ 167] 00:23:42.379 bw ( KiB/s): min= 512, max= 1024, per=3.82%, avg=664.84, stdev=163.01, samples=19 00:23:42.379 iops : min= 128, max= 256, avg=166.21, stdev=40.75, samples=19 00:23:42.379 lat (msec) : 50=7.66%, 100=43.89%, 250=48.45% 00:23:42.379 cpu : usr=32.57%, sys=1.57%, ctx=896, majf=0, minf=9 00:23:42.379 IO depths : 1=0.1%, 2=3.4%, 4=13.9%, 8=68.7%, 16=13.9%, 32=0.0%, >=64=0.0% 00:23:42.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.379 complete : 0=0.0%, 4=90.9%, 8=6.1%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83839: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=186, BW=744KiB/s (762kB/s)(7468KiB/10031msec) 00:23:42.380 slat (usec): min=4, max=8063, avg=50.80, stdev=432.56 00:23:42.380 clat (msec): min=30, max=132, avg=85.70, stdev=23.09 00:23:42.380 lat (msec): min=30, max=132, avg=85.75, stdev=23.08 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 67], 00:23:42.380 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 88], 00:23:42.380 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:23:42.380 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 133], 00:23:42.380 | 99.99th=[ 133] 00:23:42.380 bw ( KiB/s): min= 640, max= 1024, per=4.27%, avg=742.05, stdev=111.16, samples=20 00:23:42.380 iops : min= 160, max= 256, avg=185.50, stdev=27.79, samples=20 00:23:42.380 lat (msec) : 50=7.77%, 100=59.51%, 250=32.73% 00:23:42.380 cpu : usr=38.19%, sys=1.64%, ctx=1096, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83840: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=172, BW=690KiB/s (706kB/s)(6920KiB/10030msec) 00:23:42.380 slat (usec): min=4, max=9035, avg=42.72, stdev=405.98 00:23:42.380 clat (msec): min=31, max=160, avg=92.47, stdev=27.00 00:23:42.380 lat (msec): min=31, max=160, avg=92.51, stdev=27.02 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 68], 00:23:42.380 | 30.00th=[ 74], 40.00th=[ 
82], 50.00th=[ 96], 60.00th=[ 107], 00:23:42.380 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 133], 00:23:42.380 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 161], 00:23:42.380 | 99.99th=[ 161] 00:23:42.380 bw ( KiB/s): min= 400, max= 1024, per=3.95%, avg=687.30, stdev=173.01, samples=20 00:23:42.380 iops : min= 100, max= 256, avg=171.80, stdev=43.21, samples=20 00:23:42.380 lat (msec) : 50=5.66%, 100=49.54%, 250=44.80% 00:23:42.380 cpu : usr=36.28%, sys=1.25%, ctx=997, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=90.2%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83841: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=164, BW=659KiB/s (675kB/s)(6628KiB/10057msec) 00:23:42.380 slat (usec): min=3, max=8151, avg=30.85, stdev=301.48 00:23:42.380 clat (msec): min=6, max=170, avg=96.72, stdev=30.60 00:23:42.380 lat (msec): min=6, max=170, avg=96.75, stdev=30.60 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 72], 00:23:42.380 | 30.00th=[ 82], 40.00th=[ 94], 50.00th=[ 106], 60.00th=[ 110], 00:23:42.380 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 144], 00:23:42.380 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:23:42.380 | 99.99th=[ 171] 00:23:42.380 bw ( KiB/s): min= 496, max= 1258, per=3.77%, avg=655.85, stdev=196.62, samples=20 00:23:42.380 iops : min= 124, max= 314, avg=163.90, stdev=49.10, samples=20 00:23:42.380 lat (msec) : 10=1.93%, 20=0.97%, 50=4.28%, 100=37.30%, 250=55.52% 00:23:42.380 cpu : usr=40.36%, sys=1.90%, ctx=1240, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=4.5%, 4=17.8%, 8=63.9%, 16=13.7%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=92.4%, 8=3.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83842: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10044msec) 00:23:42.380 slat (usec): min=4, max=8027, avg=31.02, stdev=243.62 00:23:42.380 clat (msec): min=13, max=144, avg=84.36, stdev=23.44 00:23:42.380 lat (msec): min=13, max=144, avg=84.39, stdev=23.44 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 27], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 65], 00:23:42.380 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 86], 00:23:42.380 | 70.00th=[ 101], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:23:42.380 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 146], 00:23:42.380 | 99.99th=[ 146] 00:23:42.380 bw ( KiB/s): min= 640, max= 976, per=4.33%, avg=753.60, stdev=109.74, samples=20 00:23:42.380 iops : min= 160, max= 244, avg=188.40, stdev=27.43, samples=20 00:23:42.380 lat (msec) : 20=0.84%, 50=6.74%, 100=62.32%, 250=30.11% 00:23:42.380 cpu : usr=40.76%, sys=1.64%, ctx=1164, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83843: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=185, BW=742KiB/s (760kB/s)(7452KiB/10045msec) 00:23:42.380 slat (usec): min=7, max=8048, avg=34.95, stdev=321.70 00:23:42.380 clat (msec): min=31, max=143, avg=86.00, stdev=22.13 00:23:42.380 lat (msec): min=31, max=143, avg=86.03, stdev=22.14 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 67], 00:23:42.380 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 87], 00:23:42.380 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:23:42.380 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 144], 00:23:42.380 | 99.99th=[ 144] 00:23:42.380 bw ( KiB/s): min= 584, max= 944, per=4.25%, avg=738.80, stdev=101.07, samples=20 00:23:42.380 iops : min= 146, max= 236, avg=184.70, stdev=25.27, samples=20 00:23:42.380 lat (msec) : 50=5.37%, 100=63.88%, 250=30.76% 00:23:42.380 cpu : usr=33.07%, sys=1.35%, ctx=925, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=81.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83844: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=177, BW=710KiB/s (727kB/s)(7132KiB/10043msec) 00:23:42.380 slat (usec): min=5, max=4071, avg=24.96, stdev=119.24 00:23:42.380 clat (msec): min=36, max=156, avg=89.85, stdev=24.58 00:23:42.380 lat (msec): min=36, max=156, avg=89.87, stdev=24.58 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 46], 5.00th=[ 52], 10.00th=[ 56], 20.00th=[ 68], 00:23:42.380 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 103], 00:23:42.380 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 124], 00:23:42.380 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:23:42.380 | 99.99th=[ 157] 00:23:42.380 bw ( KiB/s): min= 512, max= 1021, per=4.08%, avg=709.45, stdev=152.73, samples=20 00:23:42.380 iops : min= 128, max= 255, avg=177.35, stdev=38.16, samples=20 00:23:42.380 lat (msec) : 50=4.66%, 100=52.66%, 250=42.68% 00:23:42.380 cpu : usr=44.26%, sys=1.60%, ctx=1081, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=89.7%, 8=8.2%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83845: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=183, BW=735KiB/s (753kB/s)(7384KiB/10044msec) 00:23:42.380 slat (usec): min=4, max=8043, avg=25.58, stdev=187.06 00:23:42.380 clat (msec): min=19, max=154, avg=86.85, stdev=24.15 00:23:42.380 lat (msec): min=19, max=154, avg=86.87, stdev=24.15 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 32], 
5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 68], 00:23:42.380 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 96], 00:23:42.380 | 70.00th=[ 107], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:23:42.380 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 155], 00:23:42.380 | 99.99th=[ 155] 00:23:42.380 bw ( KiB/s): min= 616, max= 1112, per=4.21%, avg=732.00, stdev=137.45, samples=20 00:23:42.380 iops : min= 154, max= 278, avg=183.00, stdev=34.36, samples=20 00:23:42.380 lat (msec) : 20=0.76%, 50=6.66%, 100=57.58%, 250=34.99% 00:23:42.380 cpu : usr=34.01%, sys=1.47%, ctx=1258, majf=0, minf=9 00:23:42.380 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:42.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.380 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.380 filename1: (groupid=0, jobs=1): err= 0: pid=83846: Wed Jul 24 23:24:02 2024 00:23:42.380 read: IOPS=179, BW=718KiB/s (735kB/s)(7208KiB/10036msec) 00:23:42.380 slat (usec): min=3, max=8039, avg=28.19, stdev=267.24 00:23:42.380 clat (msec): min=35, max=146, avg=88.90, stdev=21.69 00:23:42.380 lat (msec): min=35, max=146, avg=88.93, stdev=21.69 00:23:42.380 clat percentiles (msec): 00:23:42.380 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:23:42.380 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 96], 00:23:42.380 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 146], 00:23:42.381 | 99.99th=[ 146] 00:23:42.381 bw ( KiB/s): min= 560, max= 920, per=4.12%, avg=716.50, stdev=105.20, samples=20 00:23:42.381 iops : min= 140, max= 230, avg=179.10, stdev=26.26, samples=20 00:23:42.381 lat (msec) : 50=4.00%, 100=62.10%, 250=33.91% 00:23:42.381 cpu : usr=32.30%, sys=1.53%, ctx=895, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83847: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=190, BW=762KiB/s (780kB/s)(7620KiB/10004msec) 00:23:42.381 slat (usec): min=4, max=8053, avg=33.37, stdev=318.31 00:23:42.381 clat (msec): min=16, max=144, avg=83.86, stdev=23.94 00:23:42.381 lat (msec): min=16, max=144, avg=83.89, stdev=23.93 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 62], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:23:42.381 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:23:42.381 | 99.99th=[ 144] 00:23:42.381 bw ( KiB/s): min= 616, max= 1024, per=4.32%, avg=750.74, stdev=120.52, samples=19 00:23:42.381 iops : min= 154, max= 256, avg=187.63, stdev=30.13, samples=19 00:23:42.381 lat (msec) : 20=0.31%, 50=9.50%, 100=60.73%, 250=29.45% 00:23:42.381 cpu : usr=31.67%, sys=1.45%, ctx=904, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.1%, 16=15.6%, 
32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83848: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=159, BW=638KiB/s (653kB/s)(6404KiB/10036msec) 00:23:42.381 slat (usec): min=3, max=7245, avg=28.90, stdev=229.74 00:23:42.381 clat (msec): min=43, max=165, avg=99.99, stdev=26.06 00:23:42.381 lat (msec): min=43, max=165, avg=100.02, stdev=26.07 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 73], 00:23:42.381 | 30.00th=[ 84], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 112], 00:23:42.381 | 70.00th=[ 116], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 142], 00:23:42.381 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:23:42.381 | 99.99th=[ 165] 00:23:42.381 bw ( KiB/s): min= 384, max= 920, per=3.66%, avg=636.20, stdev=153.65, samples=20 00:23:42.381 iops : min= 96, max= 230, avg=159.05, stdev=38.41, samples=20 00:23:42.381 lat (msec) : 50=3.56%, 100=40.47%, 250=55.97% 00:23:42.381 cpu : usr=39.28%, sys=1.60%, ctx=1176, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=4.5%, 4=17.9%, 8=63.8%, 16=13.7%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=92.5%, 8=3.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83849: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10006msec) 00:23:42.381 slat (usec): min=4, max=8042, avg=46.68, stdev=389.94 00:23:42.381 clat (msec): min=16, max=133, avg=84.05, stdev=23.49 00:23:42.381 lat (msec): min=16, max=133, avg=84.10, stdev=23.50 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 64], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 87], 00:23:42.381 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 117], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 134], 99.95th=[ 134], 00:23:42.381 | 99.99th=[ 134] 00:23:42.381 bw ( KiB/s): min= 616, max= 968, per=4.28%, avg=743.95, stdev=108.28, samples=19 00:23:42.381 iops : min= 154, max= 242, avg=185.95, stdev=27.08, samples=19 00:23:42.381 lat (msec) : 20=0.37%, 50=8.68%, 100=59.68%, 250=31.26% 00:23:42.381 cpu : usr=39.64%, sys=2.02%, ctx=1128, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83850: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=188, BW=754KiB/s (772kB/s)(7560KiB/10027msec) 00:23:42.381 slat (usec): min=5, max=12045, avg=47.50, stdev=407.34 00:23:42.381 clat (msec): min=26, max=132, avg=84.65, stdev=22.91 00:23:42.381 lat (msec): min=26, max=132, 
avg=84.70, stdev=22.92 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 36], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 67], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 87], 00:23:42.381 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:23:42.381 | 99.99th=[ 133] 00:23:42.381 bw ( KiB/s): min= 640, max= 992, per=4.31%, avg=749.20, stdev=112.34, samples=20 00:23:42.381 iops : min= 160, max= 248, avg=187.30, stdev=28.08, samples=20 00:23:42.381 lat (msec) : 50=6.56%, 100=63.97%, 250=29.47% 00:23:42.381 cpu : usr=42.61%, sys=1.59%, ctx=1380, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83851: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=188, BW=755KiB/s (774kB/s)(7560KiB/10008msec) 00:23:42.381 slat (usec): min=4, max=8058, avg=49.03, stdev=438.36 00:23:42.381 clat (msec): min=16, max=163, avg=84.49, stdev=23.46 00:23:42.381 lat (msec): min=16, max=163, avg=84.53, stdev=23.46 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 41], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 64], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 86], 00:23:42.381 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 163], 99.95th=[ 163], 00:23:42.381 | 99.99th=[ 163] 00:23:42.381 bw ( KiB/s): min= 616, max= 1024, per=4.27%, avg=742.53, stdev=117.47, samples=19 00:23:42.381 iops : min= 154, max= 256, avg=185.58, stdev=29.38, samples=19 00:23:42.381 lat (msec) : 20=0.37%, 50=6.61%, 100=62.49%, 250=30.53% 00:23:42.381 cpu : usr=36.10%, sys=1.64%, ctx=1163, majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83852: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=188, BW=752KiB/s (770kB/s)(7544KiB/10031msec) 00:23:42.381 slat (usec): min=5, max=8025, avg=27.73, stdev=206.77 00:23:42.381 clat (msec): min=31, max=132, avg=84.93, stdev=22.94 00:23:42.381 lat (msec): min=32, max=132, avg=84.96, stdev=22.93 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 65], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 86], 00:23:42.381 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 133], 00:23:42.381 | 99.99th=[ 133] 00:23:42.381 bw ( KiB/s): min= 664, max= 1024, per=4.30%, avg=747.15, stdev=110.00, samples=20 00:23:42.381 iops : min= 166, max= 256, avg=186.75, stdev=27.44, samples=20 00:23:42.381 lat (msec) : 50=8.38%, 100=60.60%, 250=31.02% 00:23:42.381 cpu : usr=38.98%, sys=1.67%, ctx=1101, 
majf=0, minf=9 00:23:42.381 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.5%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.381 filename2: (groupid=0, jobs=1): err= 0: pid=83853: Wed Jul 24 23:24:02 2024 00:23:42.381 read: IOPS=190, BW=763KiB/s (781kB/s)(7660KiB/10041msec) 00:23:42.381 slat (usec): min=4, max=8056, avg=37.47, stdev=351.30 00:23:42.381 clat (msec): min=5, max=144, avg=83.59, stdev=27.66 00:23:42.381 lat (msec): min=5, max=144, avg=83.63, stdev=27.68 00:23:42.381 clat percentiles (msec): 00:23:42.381 | 1.00th=[ 6], 5.00th=[ 37], 10.00th=[ 51], 20.00th=[ 61], 00:23:42.381 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 92], 00:23:42.381 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 121], 00:23:42.381 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:23:42.381 | 99.99th=[ 144] 00:23:42.381 bw ( KiB/s): min= 608, max= 1592, per=4.38%, avg=761.90, stdev=224.36, samples=20 00:23:42.381 iops : min= 152, max= 398, avg=190.45, stdev=56.07, samples=20 00:23:42.381 lat (msec) : 10=2.51%, 20=0.84%, 50=6.63%, 100=56.50%, 250=33.52% 00:23:42.381 cpu : usr=38.43%, sys=1.65%, ctx=1330, majf=0, minf=9 00:23:42.381 IO depths : 1=0.2%, 2=0.5%, 4=1.4%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:42.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.381 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.382 filename2: (groupid=0, jobs=1): err= 0: pid=83854: Wed Jul 24 23:24:02 2024 00:23:42.382 read: IOPS=176, BW=708KiB/s (725kB/s)(7096KiB/10023msec) 00:23:42.382 slat (nsec): min=4012, max=60791, avg=19893.61, stdev=10117.11 00:23:42.382 clat (msec): min=31, max=157, avg=90.23, stdev=26.27 00:23:42.382 lat (msec): min=31, max=157, avg=90.25, stdev=26.27 00:23:42.382 clat percentiles (msec): 00:23:42.382 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 67], 00:23:42.382 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 101], 00:23:42.382 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 129], 00:23:42.382 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:23:42.382 | 99.99th=[ 157] 00:23:42.382 bw ( KiB/s): min= 496, max= 1024, per=4.06%, avg=706.00, stdev=155.19, samples=20 00:23:42.382 iops : min= 124, max= 256, avg=176.50, stdev=38.80, samples=20 00:23:42.382 lat (msec) : 50=5.52%, 100=55.24%, 250=39.23% 00:23:42.382 cpu : usr=35.32%, sys=1.61%, ctx=1457, majf=0, minf=9 00:23:42.382 IO depths : 1=0.1%, 2=2.1%, 4=8.5%, 8=74.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:42.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.382 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.382 issued rwts: total=1774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:42.382 00:23:42.382 Run status group 0 (all jobs): 00:23:42.382 READ: bw=17.0MiB/s (17.8MB/s), 638KiB/s-763KiB/s (653kB/s-781kB/s), io=171MiB (179MB), run=10004-10057msec 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 bdev_null0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 [2024-07-24 23:24:03.144327] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.382 bdev_null1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:42.382 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.382 { 00:23:42.382 "params": { 00:23:42.382 "name": "Nvme$subsystem", 00:23:42.382 "trtype": "$TEST_TRANSPORT", 00:23:42.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.382 "adrfam": "ipv4", 00:23:42.382 "trsvcid": "$NVMF_PORT", 00:23:42.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.382 "hdgst": ${hdgst:-false}, 00:23:42.382 "ddgst": ${ddgst:-false} 00:23:42.382 }, 00:23:42.382 "method": "bdev_nvme_attach_controller" 00:23:42.382 } 00:23:42.382 EOF 00:23:42.382 )") 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:42.383 { 00:23:42.383 "params": { 00:23:42.383 "name": "Nvme$subsystem", 00:23:42.383 "trtype": "$TEST_TRANSPORT", 00:23:42.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.383 "adrfam": "ipv4", 00:23:42.383 "trsvcid": "$NVMF_PORT", 00:23:42.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.383 "hdgst": ${hdgst:-false}, 00:23:42.383 "ddgst": ${ddgst:-false} 00:23:42.383 }, 00:23:42.383 "method": "bdev_nvme_attach_controller" 00:23:42.383 } 00:23:42.383 EOF 00:23:42.383 )") 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
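For reference, the subsystem setup traced above boils down to a short RPC sequence. The sketch below is illustrative only: it assumes a running nvmf_tgt with the TCP transport already created and SPDK's scripts/rpc.py on hand, and it simply mirrors the rpc_cmd calls in the trace (two null bdevs with 16-byte metadata and DIF type 1, each exported through its own subsystem listening on 10.0.0.2:4420).

    # Sketch of the traced create_subsystems 0 1 step (assumes nvmf_tgt is up
    # and the TCP transport already exists; arguments copied from the trace).
    for i in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done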
00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:42.383 "params": { 00:23:42.383 "name": "Nvme0", 00:23:42.383 "trtype": "tcp", 00:23:42.383 "traddr": "10.0.0.2", 00:23:42.383 "adrfam": "ipv4", 00:23:42.383 "trsvcid": "4420", 00:23:42.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.383 "hdgst": false, 00:23:42.383 "ddgst": false 00:23:42.383 }, 00:23:42.383 "method": "bdev_nvme_attach_controller" 00:23:42.383 },{ 00:23:42.383 "params": { 00:23:42.383 "name": "Nvme1", 00:23:42.383 "trtype": "tcp", 00:23:42.383 "traddr": "10.0.0.2", 00:23:42.383 "adrfam": "ipv4", 00:23:42.383 "trsvcid": "4420", 00:23:42.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.383 "hdgst": false, 00:23:42.383 "ddgst": false 00:23:42.383 }, 00:23:42.383 "method": "bdev_nvme_attach_controller" 00:23:42.383 }' 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:42.383 23:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.383 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:42.383 ... 00:23:42.383 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:42.383 ... 
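The I/O itself is driven through SPDK's fio bdev plugin rather than the kernel NVMe/TCP initiator: the test feeds fio a JSON config containing the bdev_nvme_attach_controller entries printed above, plus a generated job file, both over /dev/fd. A rough stand-alone equivalent is sketched below; bdev.json and dif.fio are placeholder file names, and the job parameters restate the NULL_DIF=1 settings echoed earlier (randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5).

    # Run fio through the SPDK bdev ioengine against the attached controllers.
    # bdev.json holds a "bdev" subsystem block with the two attach-controller
    # entries shown above; dif.fio is a conventional fio job file.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./dif.fio
    # dif.fio, approximately:
    #   [global]
    #   rw=randread
    #   bs=8k,16k,128k       # read 8k, write 16k, trim 128k, as in the header above
    #   iodepth=8
    #   numjobs=2
    #   runtime=5
    #   [filename0]
    #   filename=Nvme0n1     # bdev names follow the attach-controller names (illustrative)
    #   [filename1]
    #   filename=Nvme1n1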
00:23:42.383 fio-3.35 00:23:42.383 Starting 4 threads 00:23:46.597 00:23:46.597 filename0: (groupid=0, jobs=1): err= 0: pid=83991: Wed Jul 24 23:24:08 2024 00:23:46.597 read: IOPS=2135, BW=16.7MiB/s (17.5MB/s)(83.4MiB/5002msec) 00:23:46.597 slat (usec): min=3, max=122, avg=16.58, stdev= 8.92 00:23:46.597 clat (usec): min=1022, max=7275, avg=3699.14, stdev=1001.41 00:23:46.597 lat (usec): min=1030, max=7300, avg=3715.71, stdev=1002.03 00:23:46.597 clat percentiles (usec): 00:23:46.597 | 1.00th=[ 1942], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2606], 00:23:46.597 | 30.00th=[ 2900], 40.00th=[ 3425], 50.00th=[ 3982], 60.00th=[ 4293], 00:23:46.597 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4948], 00:23:46.597 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 6980], 00:23:46.597 | 99.99th=[ 7242] 00:23:46.597 bw ( KiB/s): min=14624, max=18592, per=26.77%, avg=17084.33, stdev=1529.81, samples=9 00:23:46.597 iops : min= 1828, max= 2324, avg=2135.44, stdev=191.33, samples=9 00:23:46.597 lat (msec) : 2=2.04%, 4=48.96%, 10=49.00% 00:23:46.597 cpu : usr=93.12%, sys=5.94%, ctx=66, majf=0, minf=9 00:23:46.597 IO depths : 1=0.2%, 2=6.3%, 4=60.9%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 complete : 0=0.0%, 4=97.6%, 8=2.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 issued rwts: total=10680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:46.597 filename0: (groupid=0, jobs=1): err= 0: pid=83992: Wed Jul 24 23:24:08 2024 00:23:46.597 read: IOPS=1748, BW=13.7MiB/s (14.3MB/s)(68.3MiB/5001msec) 00:23:46.597 slat (nsec): min=4158, max=83137, avg=17362.88, stdev=9445.13 00:23:46.597 clat (usec): min=494, max=7865, avg=4507.31, stdev=737.68 00:23:46.597 lat (usec): min=505, max=7913, avg=4524.67, stdev=736.54 00:23:46.597 clat percentiles (usec): 00:23:46.597 | 1.00th=[ 2114], 5.00th=[ 2933], 10.00th=[ 3818], 20.00th=[ 4015], 00:23:46.597 | 30.00th=[ 4359], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:23:46.597 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5669], 00:23:46.597 | 99.00th=[ 6325], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 7111], 00:23:46.597 | 99.99th=[ 7898] 00:23:46.597 bw ( KiB/s): min=13184, max=15600, per=22.22%, avg=14182.78, stdev=885.66, samples=9 00:23:46.597 iops : min= 1648, max= 1950, avg=1772.78, stdev=110.66, samples=9 00:23:46.597 lat (usec) : 500=0.01% 00:23:46.597 lat (msec) : 2=0.87%, 4=18.50%, 10=80.62% 00:23:46.597 cpu : usr=93.00%, sys=6.20%, ctx=7, majf=0, minf=0 00:23:46.597 IO depths : 1=0.4%, 2=21.3%, 4=52.2%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 issued rwts: total=8744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:46.597 filename1: (groupid=0, jobs=1): err= 0: pid=83993: Wed Jul 24 23:24:08 2024 00:23:46.597 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:23:46.597 slat (nsec): min=7278, max=83485, avg=19124.18, stdev=9502.68 00:23:46.597 clat (usec): min=979, max=7554, avg=3793.32, stdev=1011.68 00:23:46.597 lat (usec): min=990, max=7573, avg=3812.44, stdev=1010.97 00:23:46.597 clat percentiles (usec): 00:23:46.597 | 1.00th=[ 1631], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2704], 
00:23:46.597 | 30.00th=[ 3097], 40.00th=[ 3851], 50.00th=[ 4113], 60.00th=[ 4424], 00:23:46.597 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:23:46.597 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[ 6390], 99.95th=[ 7046], 00:23:46.597 | 99.99th=[ 7373] 00:23:46.597 bw ( KiB/s): min=14080, max=18592, per=25.96%, avg=16572.44, stdev=1817.19, samples=9 00:23:46.597 iops : min= 1760, max= 2324, avg=2071.56, stdev=227.15, samples=9 00:23:46.597 lat (usec) : 1000=0.03% 00:23:46.597 lat (msec) : 2=3.52%, 4=43.78%, 10=52.67% 00:23:46.597 cpu : usr=93.32%, sys=5.70%, ctx=10, majf=0, minf=9 00:23:46.597 IO depths : 1=0.3%, 2=8.5%, 4=59.9%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 issued rwts: total=10387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:46.597 filename1: (groupid=0, jobs=1): err= 0: pid=83994: Wed Jul 24 23:24:08 2024 00:23:46.597 read: IOPS=2018, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5002msec) 00:23:46.597 slat (nsec): min=6683, max=97424, avg=15007.16, stdev=9064.66 00:23:46.597 clat (usec): min=497, max=7315, avg=3914.83, stdev=1013.45 00:23:46.597 lat (usec): min=510, max=7384, avg=3929.83, stdev=1014.25 00:23:46.597 clat percentiles (usec): 00:23:46.597 | 1.00th=[ 1221], 5.00th=[ 1893], 10.00th=[ 2343], 20.00th=[ 2966], 00:23:46.597 | 30.00th=[ 3523], 40.00th=[ 3949], 50.00th=[ 4178], 60.00th=[ 4555], 00:23:46.597 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4948], 00:23:46.597 | 99.00th=[ 5276], 99.50th=[ 5342], 99.90th=[ 6194], 99.95th=[ 6783], 00:23:46.597 | 99.99th=[ 7242] 00:23:46.597 bw ( KiB/s): min=13440, max=18704, per=24.90%, avg=15891.56, stdev=2107.81, samples=9 00:23:46.597 iops : min= 1680, max= 2338, avg=1986.44, stdev=263.48, samples=9 00:23:46.597 lat (usec) : 500=0.01%, 750=0.12%, 1000=0.09% 00:23:46.597 lat (msec) : 2=5.76%, 4=37.20%, 10=56.82% 00:23:46.597 cpu : usr=92.64%, sys=6.48%, ctx=20, majf=0, minf=0 00:23:46.597 IO depths : 1=0.2%, 2=11.3%, 4=58.6%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.597 issued rwts: total=10098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.597 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:46.597 00:23:46.597 Run status group 0 (all jobs): 00:23:46.597 READ: bw=62.3MiB/s (65.4MB/s), 13.7MiB/s-16.7MiB/s (14.3MB/s-17.5MB/s), io=312MiB (327MB), run=5001-5002msec 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 23:24:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.856 00:23:46.856 real 0m23.584s 00:23:46.856 user 2m5.646s 00:23:46.856 sys 0m6.875s 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 ************************************ 00:23:46.856 END TEST fio_dif_rand_params 00:23:46.856 ************************************ 00:23:46.856 23:24:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:46.856 23:24:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:46.856 23:24:09 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:46.856 23:24:09 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:46.856 23:24:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.856 ************************************ 00:23:46.856 START TEST fio_dif_digest 00:23:46.856 ************************************ 00:23:46.856 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:23:46.856 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:46.856 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:46.857 bdev_null0 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.857 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:47.116 [2024-07-24 23:24:09.355097] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:47.116 { 00:23:47.116 "params": { 00:23:47.116 "name": "Nvme$subsystem", 00:23:47.116 "trtype": "$TEST_TRANSPORT", 00:23:47.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.116 "adrfam": "ipv4", 00:23:47.116 "trsvcid": "$NVMF_PORT", 00:23:47.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.116 "hdgst": ${hdgst:-false}, 00:23:47.116 "ddgst": ${ddgst:-false} 00:23:47.116 }, 00:23:47.116 "method": "bdev_nvme_attach_controller" 00:23:47.116 } 00:23:47.116 EOF 00:23:47.116 )") 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:47.116 "params": { 00:23:47.116 "name": "Nvme0", 00:23:47.116 "trtype": "tcp", 00:23:47.116 "traddr": "10.0.0.2", 00:23:47.116 "adrfam": "ipv4", 00:23:47.116 "trsvcid": "4420", 00:23:47.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:47.116 "hdgst": true, 00:23:47.116 "ddgst": true 00:23:47.116 }, 00:23:47.116 "method": "bdev_nvme_attach_controller" 00:23:47.116 }' 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:47.116 23:24:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.116 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:47.116 ... 
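This digest pass differs from the earlier random-parameters run in only two places: the backing null bdev is created with DIF type 3, and the generated attach-controller entry enables NVMe/TCP header and data digests (the "hdgst": true / "ddgst": true visible in the JSON above). A minimal sketch of those deltas, under the same assumptions as the earlier sketches:

    # DIF type 3 null bdev behind cnode0; subsystem and listener setup as before.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # In the fio plugin's JSON config the controller is attached with digests on:
    #   { "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
    #                 "adrfam": "ipv4", "trsvcid": "4420",
    #                 "subnqn": "nqn.2016-06.io.spdk:cnode0",
    #                 "hostnqn": "nqn.2016-06.io.spdk:host0",
    #                 "hdgst": true, "ddgst": true },
    #     "method": "bdev_nvme_attach_controller" }
    # Job parameters for this pass: rw=randread, bs=128k, iodepth=3, numjobs=3, runtime=10.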
00:23:47.116 fio-3.35 00:23:47.116 Starting 3 threads 00:23:59.322 00:23:59.322 filename0: (groupid=0, jobs=1): err= 0: pid=84100: Wed Jul 24 23:24:20 2024 00:23:59.322 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(275MiB/10001msec) 00:23:59.322 slat (nsec): min=7187, max=48010, avg=10701.24, stdev=4392.41 00:23:59.322 clat (usec): min=10575, max=16647, avg=13636.08, stdev=253.15 00:23:59.322 lat (usec): min=10583, max=16673, avg=13646.78, stdev=253.39 00:23:59.322 clat percentiles (usec): 00:23:59.322 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:23:59.322 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:23:59.322 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[13960], 00:23:59.322 | 99.00th=[14222], 99.50th=[14353], 99.90th=[16581], 99.95th=[16581], 00:23:59.322 | 99.99th=[16712] 00:23:59.322 bw ( KiB/s): min=26933, max=28416, per=33.32%, avg=28095.42, stdev=458.60, samples=19 00:23:59.322 iops : min= 210, max= 222, avg=219.47, stdev= 3.64, samples=19 00:23:59.322 lat (msec) : 20=100.00% 00:23:59.322 cpu : usr=91.24%, sys=8.17%, ctx=6, majf=0, minf=9 00:23:59.322 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.322 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.322 filename0: (groupid=0, jobs=1): err= 0: pid=84101: Wed Jul 24 23:24:20 2024 00:23:59.322 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(275MiB/10006msec) 00:23:59.322 slat (nsec): min=6998, max=50070, avg=10644.00, stdev=4689.14 00:23:59.322 clat (usec): min=5910, max=14789, avg=13624.24, stdev=358.38 00:23:59.322 lat (usec): min=5917, max=14802, avg=13634.88, stdev=358.66 00:23:59.322 clat percentiles (usec): 00:23:59.322 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:23:59.322 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:23:59.322 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13829], 95.00th=[13960], 00:23:59.322 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14746], 99.95th=[14746], 00:23:59.322 | 99.99th=[14746] 00:23:59.322 bw ( KiB/s): min=27648, max=28416, per=33.32%, avg=28092.63, stdev=389.57, samples=19 00:23:59.322 iops : min= 216, max= 222, avg=219.47, stdev= 3.04, samples=19 00:23:59.322 lat (msec) : 10=0.14%, 20=99.86% 00:23:59.322 cpu : usr=91.95%, sys=7.47%, ctx=11, majf=0, minf=0 00:23:59.322 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.322 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.322 filename0: (groupid=0, jobs=1): err= 0: pid=84102: Wed Jul 24 23:24:20 2024 00:23:59.322 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(275MiB/10001msec) 00:23:59.322 slat (nsec): min=7157, max=58639, avg=11362.65, stdev=5432.12 00:23:59.322 clat (usec): min=12935, max=14649, avg=13633.73, stdev=194.58 00:23:59.322 lat (usec): min=12943, max=14662, avg=13645.09, stdev=195.29 00:23:59.322 clat percentiles (usec): 00:23:59.322 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:23:59.322 | 30.00th=[13566], 40.00th=[13566], 
50.00th=[13566], 60.00th=[13698], 00:23:59.322 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13829], 95.00th=[13960], 00:23:59.322 | 99.00th=[14222], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:23:59.322 | 99.99th=[14615] 00:23:59.322 bw ( KiB/s): min=27648, max=28416, per=33.32%, avg=28092.63, stdev=389.57, samples=19 00:23:59.322 iops : min= 216, max= 222, avg=219.47, stdev= 3.04, samples=19 00:23:59.322 lat (msec) : 20=100.00% 00:23:59.322 cpu : usr=91.77%, sys=7.58%, ctx=18, majf=0, minf=0 00:23:59.322 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.322 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.322 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.322 00:23:59.322 Run status group 0 (all jobs): 00:23:59.322 READ: bw=82.3MiB/s (86.3MB/s), 27.4MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=824MiB (864MB), run=10001-10006msec 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.322 ************************************ 00:23:59.322 END TEST fio_dif_digest 00:23:59.322 ************************************ 00:23:59.322 00:23:59.322 real 0m11.121s 00:23:59.322 user 0m28.263s 00:23:59.322 sys 0m2.592s 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.322 23:24:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:59.322 23:24:20 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:59.322 23:24:20 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:59.322 23:24:20 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.322 rmmod nvme_tcp 00:23:59.322 rmmod nvme_fabrics 00:23:59.322 rmmod nvme_keyring 00:23:59.322 23:24:20 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83357 ']' 00:23:59.322 23:24:20 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83357 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83357 ']' 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83357 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83357 00:23:59.323 killing process with pid 83357 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83357' 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83357 00:23:59.323 23:24:20 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83357 00:23:59.323 23:24:20 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:59.323 23:24:20 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:59.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.323 Waiting for block devices as requested 00:23:59.323 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:59.323 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.323 23:24:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:59.323 23:24:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.323 23:24:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:59.323 00:23:59.323 real 1m0.282s 00:23:59.323 user 3m49.312s 00:23:59.323 sys 0m18.748s 00:23:59.323 ************************************ 00:23:59.323 END TEST nvmf_dif 00:23:59.323 ************************************ 00:23:59.323 23:24:21 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.323 23:24:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:59.323 23:24:21 -- common/autotest_common.sh@1142 -- # return 0 00:23:59.323 23:24:21 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:59.323 23:24:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:59.323 23:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.323 23:24:21 -- common/autotest_common.sh@10 -- # set +x 00:23:59.323 ************************************ 00:23:59.323 START TEST nvmf_abort_qd_sizes 00:23:59.323 ************************************ 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:59.323 * Looking for test storage... 00:23:59.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:59.323 23:24:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:59.323 Cannot find device "nvmf_tgt_br" 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.323 Cannot find device "nvmf_tgt_br2" 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:59.323 Cannot find device "nvmf_tgt_br" 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:59.323 Cannot find device "nvmf_tgt_br2" 00:23:59.323 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:59.324 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:59.581 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:59.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:59.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:59.582 23:24:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:59.582 23:24:21 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:59.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:23:59.582 00:23:59.582 --- 10.0.0.2 ping statistics --- 00:23:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.582 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:59.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:59.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:59.582 00:23:59.582 --- 10.0.0.3 ping statistics --- 00:23:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.582 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:59.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:59.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:23:59.582 00:23:59.582 --- 10.0.0.1 ping statistics --- 00:23:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.582 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:59.582 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:00.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:00.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:00.517 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84695 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84695 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84695 ']' 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.517 23:24:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:00.776 [2024-07-24 23:24:23.015119] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:24:00.776 [2024-07-24 23:24:23.015250] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.776 [2024-07-24 23:24:23.156756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.034 [2024-07-24 23:24:23.315297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.034 [2024-07-24 23:24:23.315561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.034 [2024-07-24 23:24:23.315702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.034 [2024-07-24 23:24:23.315803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.034 [2024-07-24 23:24:23.315887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.034 [2024-07-24 23:24:23.316126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.034 [2024-07-24 23:24:23.316219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.034 [2024-07-24 23:24:23.316413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.034 [2024-07-24 23:24:23.316352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.034 [2024-07-24 23:24:23.392484] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:01.600 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.600 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:24:01.600 23:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.600 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.600 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:24:01.858 23:24:24 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
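Just above, the harness enumerates NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe), i.e. the 0108 prefix it feeds to awk, and it then checks /sys/bus/pci/drivers/nvme for each BDF it finds. A condensed sketch of that enumeration, assuming the standard lspci -mm -n -D output format:

# sketch: print every NVMe function (class/subclass 0108) and whether the
# kernel nvme driver currently owns it
for bdf in $(lspci -mm -n -D | awk '$2 ~ /0108/ {print $1}' | tr -d '"'); do
    if [[ -e /sys/bus/pci/drivers/nvme/$bdf ]]; then
        echo "$bdf: bound to the kernel nvme driver"
    else
        echo "$bdf: not bound to the kernel nvme driver (e.g. uio/vfio)"
    fi
done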
00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.858 23:24:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:01.858 ************************************ 00:24:01.858 START TEST spdk_target_abort 00:24:01.858 ************************************ 00:24:01.858 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:24:01.858 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:01.858 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:01.858 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 spdk_targetn1 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 [2024-07-24 23:24:24.242022] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:01.859 [2024-07-24 23:24:24.270631] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.859 23:24:24 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:01.859 23:24:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:05.146 Initializing NVMe Controllers 00:24:05.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:05.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:05.146 Initialization complete. Launching workers. 
00:24:05.146 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10316, failed: 0 00:24:05.146 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1042, failed to submit 9274 00:24:05.146 success 740, unsuccess 302, failed 0 00:24:05.146 23:24:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:05.146 23:24:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:08.429 Initializing NVMe Controllers 00:24:08.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:08.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:08.429 Initialization complete. Launching workers. 00:24:08.429 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8880, failed: 0 00:24:08.429 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1155, failed to submit 7725 00:24:08.429 success 376, unsuccess 779, failed 0 00:24:08.429 23:24:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:08.429 23:24:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:11.712 Initializing NVMe Controllers 00:24:11.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:11.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:11.712 Initialization complete. Launching workers. 
00:24:11.712 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31848, failed: 0 00:24:11.712 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2268, failed to submit 29580 00:24:11.712 success 483, unsuccess 1785, failed 0 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.712 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84695 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84695 ']' 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84695 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84695 00:24:12.278 killing process with pid 84695 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84695' 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84695 00:24:12.278 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84695 00:24:12.536 00:24:12.536 real 0m10.818s 00:24:12.536 user 0m43.715s 00:24:12.536 sys 0m2.287s 00:24:12.536 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.536 ************************************ 00:24:12.536 END TEST spdk_target_abort 00:24:12.536 ************************************ 00:24:12.536 23:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:12.536 23:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:12.536 23:24:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:12.536 23:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:12.536 23:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.536 23:24:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:12.795 
************************************ 00:24:12.796 START TEST kernel_target_abort 00:24:12.796 ************************************ 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:12.796 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:13.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:13.054 Waiting for block devices as requested 00:24:13.054 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:13.313 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:13.313 No valid GPT data, bailing 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:13.313 No valid GPT data, bailing 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
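The loop walking /sys/block/nvme* above is screening for a block device the kernel target can safely claim: zoned namespaces are rejected, and so is anything that already carries a partition table, which is what the spdk-gpt.py and blkid probes are for. For a single device the check reduces to roughly the following; the device name is an illustrative assumption:

dev=nvme0n1   # illustrative; the run above ends up picking /dev/nvme1n1
zoned=$(cat /sys/block/$dev/queue/zoned 2>/dev/null || echo none)
if [[ $zoned != none ]]; then
    echo "$dev: zoned device, skipped"
elif [[ -n $(blkid -s PTTYPE -o value /dev/$dev) ]]; then
    echo "$dev: already has a partition table, skipped"
else
    echo "$dev: unpartitioned, usable as the kernel target namespace"
fi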
00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:13.313 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:13.572 No valid GPT data, bailing 00:24:13.572 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:13.572 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:13.573 No valid GPT data, bailing 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 --hostid=e26f5e1a-ae07-4101-a640-4712c9abba53 -a 10.0.0.1 -t tcp -s 4420 00:24:13.573 00:24:13.573 Discovery Log Number of Records 2, Generation counter 2 00:24:13.573 =====Discovery Log Entry 0====== 00:24:13.573 trtype: tcp 00:24:13.573 adrfam: ipv4 00:24:13.573 subtype: current discovery subsystem 00:24:13.573 treq: not specified, sq flow control disable supported 00:24:13.573 portid: 1 00:24:13.573 trsvcid: 4420 00:24:13.573 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:13.573 traddr: 10.0.0.1 00:24:13.573 eflags: none 00:24:13.573 sectype: none 00:24:13.573 =====Discovery Log Entry 1====== 00:24:13.573 trtype: tcp 00:24:13.573 adrfam: ipv4 00:24:13.573 subtype: nvme subsystem 00:24:13.573 treq: not specified, sq flow control disable supported 00:24:13.573 portid: 1 00:24:13.573 trsvcid: 4420 00:24:13.573 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:13.573 traddr: 10.0.0.1 00:24:13.573 eflags: none 00:24:13.573 sectype: none 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:13.573 23:24:35 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.573 23:24:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:13.573 23:24:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:13.573 23:24:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:16.858 Initializing NVMe Controllers 00:24:16.858 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:16.858 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:16.858 Initialization complete. Launching workers. 00:24:16.858 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33210, failed: 0 00:24:16.858 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33210, failed to submit 0 00:24:16.858 success 0, unsuccess 33210, failed 0 00:24:16.858 23:24:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:16.858 23:24:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:20.166 Initializing NVMe Controllers 00:24:20.166 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:20.166 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:20.166 Initialization complete. Launching workers. 
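The kernel-target setup traced above (nvmf/common.sh@650-677) reduces to a handful of configfs operations before the abort runs start. Xtrace hides the redirect targets, so the nvmet attribute names below (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are the standard ones and are assumed rather than read from this log; a minimal sketch:

    # Modules were already loaded in this run; normally: modprobe nvmet nvmet_tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed target
    echo 1            > "$subsys/attr_allow_any_host"               # assumed target
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Sanity check, as the trace does before launching the abort tool:
    nvme discover -t tcp -a 10.0.0.1 -s 4420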
00:24:20.166 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69231, failed: 0 00:24:20.166 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29820, failed to submit 39411 00:24:20.166 success 0, unsuccess 29820, failed 0 00:24:20.166 23:24:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:20.166 23:24:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:23.450 Initializing NVMe Controllers 00:24:23.450 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:23.450 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:23.450 Initialization complete. Launching workers. 00:24:23.450 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80680, failed: 0 00:24:23.450 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20110, failed to submit 60570 00:24:23.450 success 0, unsuccess 20110, failed 0 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:23.450 23:24:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:24.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:25.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:25.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:25.920 00:24:25.920 real 0m12.998s 00:24:25.920 user 0m6.149s 00:24:25.920 sys 0m4.301s 00:24:25.920 23:24:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.920 23:24:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:25.920 ************************************ 00:24:25.920 END TEST kernel_target_abort 00:24:25.920 ************************************ 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:25.920 
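The teardown traced here (clean_kernel_target, nvmf/common.sh@684-695) reverses those steps. The target of the echo 0 is likewise hidden by xtrace; disabling the namespace first is assumed:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$subsys/namespaces/1/enable"   # assumed target
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet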
23:24:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.920 rmmod nvme_tcp 00:24:25.920 rmmod nvme_fabrics 00:24:25.920 rmmod nvme_keyring 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84695 ']' 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84695 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84695 ']' 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84695 00:24:25.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84695) - No such process 00:24:25.920 Process with pid 84695 is not found 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84695 is not found' 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:25.920 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:26.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:26.178 Waiting for block devices as requested 00:24:26.178 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:26.437 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:26.437 00:24:26.437 real 0m27.144s 00:24:26.437 user 0m51.092s 00:24:26.437 sys 0m7.941s 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.437 23:24:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:26.437 ************************************ 00:24:26.437 END TEST nvmf_abort_qd_sizes 00:24:26.437 ************************************ 00:24:26.437 23:24:48 -- common/autotest_common.sh@1142 -- # return 0 00:24:26.437 23:24:48 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:26.437 23:24:48 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:24:26.437 23:24:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.437 23:24:48 -- common/autotest_common.sh@10 -- # set +x 00:24:26.437 ************************************ 00:24:26.437 START TEST keyring_file 00:24:26.437 ************************************ 00:24:26.437 23:24:48 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:26.437 * Looking for test storage... 00:24:26.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:26.437 23:24:48 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:26.437 23:24:48 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:26.437 23:24:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.437 23:24:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.437 23:24:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.437 23:24:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.437 23:24:48 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.437 23:24:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.437 23:24:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:26.437 23:24:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.437 23:24:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.437 23:24:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:26.437 23:24:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:26.437 23:24:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:26.437 23:24:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:26.437 23:24:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:26.438 23:24:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:26.438 23:24:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZSB4gpBp94 00:24:26.438 23:24:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:26.438 23:24:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZSB4gpBp94 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZSB4gpBp94 00:24:26.695 23:24:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ZSB4gpBp94 00:24:26.695 23:24:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oU84RYq6lP 00:24:26.695 23:24:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:26.695 23:24:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:26.695 23:24:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oU84RYq6lP 00:24:26.695 23:24:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oU84RYq6lP 00:24:26.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.695 23:24:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.oU84RYq6lP 00:24:26.695 23:24:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=85568 00:24:26.695 23:24:49 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:26.695 23:24:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85568 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85568 ']' 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.695 23:24:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:26.695 [2024-07-24 23:24:49.094321] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
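prep_key (keyring/common.sh@15-23, traced above) produces each key file: a mktemp path holding an NVMe TLS PSK interchange string, chmod'd to 0600. The helper's internals are not visible in this log; the sketch below follows the TP 8006 interchange layout (base64 of the PSK bytes followed by their little-endian CRC32) and treats the hex string as literal bytes, both of which are assumptions:

    key=00112233445566778899aabbccddeeff
    digest=0
    path=$(mktemp)   # e.g. /tmp/tmp.ZSB4gpBp94 in the run above
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); d = int(sys.argv[2]); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (d, base64.b64encode(k + crc).decode()))' \
        "$key" "$digest" > "$path"
    chmod 0600 "$path"
    echo "$path"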
00:24:26.695 [2024-07-24 23:24:49.094651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85568 ] 00:24:26.953 [2024-07-24 23:24:49.232300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.953 [2024-07-24 23:24:49.380782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.211 [2024-07-24 23:24:49.462005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:27.777 23:24:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:27.777 [2024-07-24 23:24:50.086011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.777 null0 00:24:27.777 [2024-07-24 23:24:50.117972] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.777 [2024-07-24 23:24:50.118423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:27.777 [2024-07-24 23:24:50.125956] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.777 23:24:50 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:27.777 [2024-07-24 23:24:50.137967] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:27.777 request: 00:24:27.777 { 00:24:27.777 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:27.777 "secure_channel": false, 00:24:27.777 "listen_address": { 00:24:27.777 "trtype": "tcp", 00:24:27.777 "traddr": "127.0.0.1", 00:24:27.777 "trsvcid": "4420" 00:24:27.777 }, 00:24:27.777 "method": "nvmf_subsystem_add_listener", 00:24:27.777 "req_id": 1 00:24:27.777 } 00:24:27.777 Got JSON-RPC error response 00:24:27.777 response: 00:24:27.777 { 00:24:27.777 "code": -32602, 00:24:27.777 "message": "Invalid parameters" 00:24:27.777 } 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
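The NOT evaluation that continues just below is a stock autotest idiom: run a command that is expected to fail and succeed only if it did fail. A simplified equivalent of that helper (the real autotest_common.sh version also validates the argument type and juggles xtrace) plus the call being exercised here:

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && es=1   # deaths by signal count as ordinary failures
        ((es != 0))            # return success only when the wrapped command failed
    }

    # The listener already exists on 127.0.0.1:4420, so this second add must be rejected:
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0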
00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:27.777 23:24:50 keyring_file -- keyring/file.sh@46 -- # bperfpid=85585 00:24:27.777 23:24:50 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85585 /var/tmp/bperf.sock 00:24:27.777 23:24:50 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85585 ']' 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:27.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:27.777 23:24:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:27.777 [2024-07-24 23:24:50.207625] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 00:24:27.777 [2024-07-24 23:24:50.207907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85585 ] 00:24:28.035 [2024-07-24 23:24:50.347903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.035 [2024-07-24 23:24:50.493928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.293 [2024-07-24 23:24:50.570726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:28.859 23:24:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.859 23:24:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:28.859 23:24:51 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:28.859 23:24:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:29.118 23:24:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oU84RYq6lP 00:24:29.118 23:24:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oU84RYq6lP 00:24:29.376 23:24:51 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:29.376 23:24:51 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:29.376 23:24:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:29.376 23:24:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:29.376 23:24:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:29.633 23:24:51 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ZSB4gpBp94 == 
\/\t\m\p\/\t\m\p\.\Z\S\B\4\g\p\B\p\9\4 ]] 00:24:29.634 23:24:51 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:29.634 23:24:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:29.634 23:24:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:29.634 23:24:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:29.634 23:24:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:29.892 23:24:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.oU84RYq6lP == \/\t\m\p\/\t\m\p\.\o\U\8\4\R\Y\q\6\l\P ]] 00:24:29.892 23:24:52 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:29.892 23:24:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:29.892 23:24:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:29.892 23:24:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:29.892 23:24:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:29.892 23:24:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:30.150 23:24:52 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:30.150 23:24:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:30.150 23:24:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:30.150 23:24:52 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:30.150 23:24:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:30.408 [2024-07-24 23:24:52.821307] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.408 nvme0n1 00:24:30.666 23:24:52 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:30.666 23:24:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:30.666 23:24:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:30.666 23:24:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:30.666 23:24:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.666 23:24:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:30.960 23:24:53 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:30.960 23:24:53 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:30.960 23:24:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:30.960 23:24:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:30.960 23:24:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:30.960 23:24:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:30.960 23:24:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:30.960 23:24:53 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:30.960 23:24:53 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:31.217 Running I/O for 1 seconds... 00:24:32.151 00:24:32.151 Latency(us) 00:24:32.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.151 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:32.151 nvme0n1 : 1.01 11764.07 45.95 0.00 0.00 10841.38 4081.11 17396.83 00:24:32.151 =================================================================================================================== 00:24:32.151 Total : 11764.07 45.95 0.00 0.00 10841.38 4081.11 17396.83 00:24:32.151 0 00:24:32.151 23:24:54 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:32.151 23:24:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:32.410 23:24:54 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:32.410 23:24:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:32.410 23:24:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:32.410 23:24:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:32.410 23:24:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:32.410 23:24:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.668 23:24:55 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:32.668 23:24:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:32.668 23:24:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:32.668 23:24:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:32.668 23:24:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:32.668 23:24:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:32.668 23:24:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:32.927 23:24:55 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:32.927 23:24:55 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
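The repeated refcnt checks in this trace come from two small wrappers over the keyring_get_keys RPC (keyring/common.sh@8-12). The same queries against the bdevperf RPC socket look roughly like this, with jq doing the filtering:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_key()    { "$rpc" -s "$sock" keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
    get_refcnt() { get_key "$1" | jq -r .refcnt; }

    # key0 is held by the keyring itself plus the attached nvme0 controller,
    # hence the (( 2 == 2 )) check above; after bdev_nvme_detach_controller nvme0
    # it drops back to 1.
    get_refcnt key0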
00:24:32.927 23:24:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:32.927 23:24:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:33.186 [2024-07-24 23:24:55.563789] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.186 [2024-07-24 23:24:55.564345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a34f0 (107): Transport endpoint is not connected 00:24:33.186 [2024-07-24 23:24:55.565331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a34f0 (9): Bad file descriptor 00:24:33.186 [2024-07-24 23:24:55.566328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:33.186 [2024-07-24 23:24:55.566368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:33.186 [2024-07-24 23:24:55.566379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:33.186 request: 00:24:33.186 { 00:24:33.186 "name": "nvme0", 00:24:33.186 "trtype": "tcp", 00:24:33.186 "traddr": "127.0.0.1", 00:24:33.186 "adrfam": "ipv4", 00:24:33.186 "trsvcid": "4420", 00:24:33.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:33.186 "prchk_reftag": false, 00:24:33.186 "prchk_guard": false, 00:24:33.186 "hdgst": false, 00:24:33.186 "ddgst": false, 00:24:33.186 "psk": "key1", 00:24:33.186 "method": "bdev_nvme_attach_controller", 00:24:33.186 "req_id": 1 00:24:33.186 } 00:24:33.186 Got JSON-RPC error response 00:24:33.186 response: 00:24:33.186 { 00:24:33.186 "code": -5, 00:24:33.186 "message": "Input/output error" 00:24:33.186 } 00:24:33.186 23:24:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:33.186 23:24:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.186 23:24:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.186 23:24:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.186 23:24:55 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:33.186 23:24:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:33.186 23:24:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:33.186 23:24:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:33.186 23:24:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:33.186 23:24:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:33.444 23:24:55 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:33.444 23:24:55 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:33.444 23:24:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:33.444 23:24:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:33.444 23:24:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:33.444 23:24:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == "key1")' 00:24:33.444 23:24:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:33.703 23:24:56 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:33.703 23:24:56 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:33.703 23:24:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:33.962 23:24:56 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:33.962 23:24:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:34.221 23:24:56 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:34.221 23:24:56 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:34.221 23:24:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:34.481 23:24:56 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:34.481 23:24:56 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ZSB4gpBp94 00:24:34.481 23:24:56 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:34.481 23:24:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.481 23:24:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.740 [2024-07-24 23:24:57.106767] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZSB4gpBp94': 0100660 00:24:34.740 [2024-07-24 23:24:57.106809] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:34.740 request: 00:24:34.740 { 00:24:34.740 "name": "key0", 00:24:34.740 "path": "/tmp/tmp.ZSB4gpBp94", 00:24:34.740 "method": "keyring_file_add_key", 00:24:34.740 "req_id": 1 00:24:34.740 } 00:24:34.740 Got JSON-RPC error response 00:24:34.740 response: 00:24:34.740 { 00:24:34.740 "code": -1, 00:24:34.740 "message": "Operation not permitted" 00:24:34.740 } 00:24:34.740 23:24:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:34.740 23:24:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:34.740 23:24:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:34.740 23:24:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:34.740 23:24:57 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ZSB4gpBp94 00:24:34.740 23:24:57 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.740 23:24:57 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZSB4gpBp94 00:24:34.999 23:24:57 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ZSB4gpBp94 00:24:34.999 23:24:57 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:34.999 23:24:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:34.999 23:24:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:34.999 23:24:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:34.999 23:24:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:34.999 23:24:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:35.258 23:24:57 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:35.258 23:24:57 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:35.258 23:24:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:35.258 23:24:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:35.516 [2024-07-24 23:24:57.758919] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ZSB4gpBp94': No such file or directory 00:24:35.517 [2024-07-24 23:24:57.758964] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:35.517 [2024-07-24 23:24:57.759005] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:35.517 [2024-07-24 23:24:57.759014] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:35.517 [2024-07-24 23:24:57.759022] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:35.517 request: 00:24:35.517 { 00:24:35.517 "name": "nvme0", 00:24:35.517 "trtype": "tcp", 00:24:35.517 "traddr": "127.0.0.1", 00:24:35.517 "adrfam": "ipv4", 00:24:35.517 "trsvcid": "4420", 00:24:35.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:35.517 "prchk_reftag": false, 00:24:35.517 "prchk_guard": false, 00:24:35.517 "hdgst": false, 00:24:35.517 "ddgst": false, 00:24:35.517 "psk": "key0", 00:24:35.517 "method": "bdev_nvme_attach_controller", 00:24:35.517 "req_id": 1 00:24:35.517 } 00:24:35.517 
Got JSON-RPC error response 00:24:35.517 response: 00:24:35.517 { 00:24:35.517 "code": -19, 00:24:35.517 "message": "No such device" 00:24:35.517 } 00:24:35.517 23:24:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:24:35.517 23:24:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:35.517 23:24:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:35.517 23:24:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:35.517 23:24:57 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:35.517 23:24:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:35.776 23:24:58 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.olYFrA1O6E 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:24:35.776 23:24:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.olYFrA1O6E 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.olYFrA1O6E 00:24:35.776 23:24:58 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.olYFrA1O6E 00:24:35.776 23:24:58 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.olYFrA1O6E 00:24:35.776 23:24:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.olYFrA1O6E 00:24:36.035 23:24:58 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:36.035 23:24:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:36.294 nvme0n1 00:24:36.294 23:24:58 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:36.294 23:24:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:36.294 23:24:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:36.294 23:24:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:36.294 23:24:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:36.294 23:24:58 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.552 23:24:58 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:36.552 23:24:58 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:36.552 23:24:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:36.811 23:24:59 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:36.811 23:24:59 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:36.811 23:24:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:36.811 23:24:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:36.811 23:24:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:37.088 23:24:59 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:37.088 23:24:59 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:37.088 23:24:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:37.088 23:24:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:37.088 23:24:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:37.088 23:24:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:37.088 23:24:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:37.346 23:24:59 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:37.347 23:24:59 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:37.347 23:24:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:37.605 23:24:59 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:37.605 23:24:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:37.605 23:24:59 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:37.864 23:25:00 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:37.864 23:25:00 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.olYFrA1O6E 00:24:37.864 23:25:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.olYFrA1O6E 00:24:37.864 23:25:00 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oU84RYq6lP 00:24:37.864 23:25:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oU84RYq6lP 00:24:38.122 23:25:00 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:38.122 23:25:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:38.380 nvme0n1 00:24:38.380 23:25:00 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:38.380 23:25:00 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:38.948 23:25:01 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:38.948 "subsystems": [ 00:24:38.949 { 00:24:38.949 "subsystem": "keyring", 00:24:38.949 "config": [ 00:24:38.949 { 00:24:38.949 "method": "keyring_file_add_key", 00:24:38.949 "params": { 00:24:38.949 "name": "key0", 00:24:38.949 "path": "/tmp/tmp.olYFrA1O6E" 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "keyring_file_add_key", 00:24:38.949 "params": { 00:24:38.949 "name": "key1", 00:24:38.949 "path": "/tmp/tmp.oU84RYq6lP" 00:24:38.949 } 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "iobuf", 00:24:38.949 "config": [ 00:24:38.949 { 00:24:38.949 "method": "iobuf_set_options", 00:24:38.949 "params": { 00:24:38.949 "small_pool_count": 8192, 00:24:38.949 "large_pool_count": 1024, 00:24:38.949 "small_bufsize": 8192, 00:24:38.949 "large_bufsize": 135168 00:24:38.949 } 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "sock", 00:24:38.949 "config": [ 00:24:38.949 { 00:24:38.949 "method": "sock_set_default_impl", 00:24:38.949 "params": { 00:24:38.949 "impl_name": "uring" 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "sock_impl_set_options", 00:24:38.949 "params": { 00:24:38.949 "impl_name": "ssl", 00:24:38.949 "recv_buf_size": 4096, 00:24:38.949 "send_buf_size": 4096, 00:24:38.949 "enable_recv_pipe": true, 00:24:38.949 "enable_quickack": false, 00:24:38.949 "enable_placement_id": 0, 00:24:38.949 "enable_zerocopy_send_server": true, 00:24:38.949 "enable_zerocopy_send_client": false, 00:24:38.949 "zerocopy_threshold": 0, 00:24:38.949 "tls_version": 0, 00:24:38.949 "enable_ktls": false 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "sock_impl_set_options", 00:24:38.949 "params": { 00:24:38.949 "impl_name": "posix", 00:24:38.949 "recv_buf_size": 2097152, 00:24:38.949 "send_buf_size": 2097152, 00:24:38.949 "enable_recv_pipe": true, 00:24:38.949 "enable_quickack": false, 00:24:38.949 "enable_placement_id": 0, 00:24:38.949 "enable_zerocopy_send_server": true, 00:24:38.949 "enable_zerocopy_send_client": false, 00:24:38.949 "zerocopy_threshold": 0, 00:24:38.949 "tls_version": 0, 00:24:38.949 "enable_ktls": false 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "sock_impl_set_options", 00:24:38.949 "params": { 00:24:38.949 "impl_name": "uring", 00:24:38.949 "recv_buf_size": 2097152, 00:24:38.949 "send_buf_size": 2097152, 00:24:38.949 "enable_recv_pipe": true, 00:24:38.949 "enable_quickack": false, 00:24:38.949 "enable_placement_id": 0, 00:24:38.949 "enable_zerocopy_send_server": false, 00:24:38.949 "enable_zerocopy_send_client": false, 00:24:38.949 "zerocopy_threshold": 0, 00:24:38.949 "tls_version": 0, 00:24:38.949 "enable_ktls": false 00:24:38.949 } 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "vmd", 00:24:38.949 "config": [] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "accel", 00:24:38.949 "config": [ 00:24:38.949 { 00:24:38.949 "method": "accel_set_options", 00:24:38.949 "params": { 00:24:38.949 "small_cache_size": 128, 00:24:38.949 "large_cache_size": 16, 00:24:38.949 "task_count": 2048, 00:24:38.949 "sequence_count": 2048, 00:24:38.949 "buf_count": 2048 00:24:38.949 } 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "bdev", 00:24:38.949 "config": [ 00:24:38.949 { 
00:24:38.949 "method": "bdev_set_options", 00:24:38.949 "params": { 00:24:38.949 "bdev_io_pool_size": 65535, 00:24:38.949 "bdev_io_cache_size": 256, 00:24:38.949 "bdev_auto_examine": true, 00:24:38.949 "iobuf_small_cache_size": 128, 00:24:38.949 "iobuf_large_cache_size": 16 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_raid_set_options", 00:24:38.949 "params": { 00:24:38.949 "process_window_size_kb": 1024 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_iscsi_set_options", 00:24:38.949 "params": { 00:24:38.949 "timeout_sec": 30 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_nvme_set_options", 00:24:38.949 "params": { 00:24:38.949 "action_on_timeout": "none", 00:24:38.949 "timeout_us": 0, 00:24:38.949 "timeout_admin_us": 0, 00:24:38.949 "keep_alive_timeout_ms": 10000, 00:24:38.949 "arbitration_burst": 0, 00:24:38.949 "low_priority_weight": 0, 00:24:38.949 "medium_priority_weight": 0, 00:24:38.949 "high_priority_weight": 0, 00:24:38.949 "nvme_adminq_poll_period_us": 10000, 00:24:38.949 "nvme_ioq_poll_period_us": 0, 00:24:38.949 "io_queue_requests": 512, 00:24:38.949 "delay_cmd_submit": true, 00:24:38.949 "transport_retry_count": 4, 00:24:38.949 "bdev_retry_count": 3, 00:24:38.949 "transport_ack_timeout": 0, 00:24:38.949 "ctrlr_loss_timeout_sec": 0, 00:24:38.949 "reconnect_delay_sec": 0, 00:24:38.949 "fast_io_fail_timeout_sec": 0, 00:24:38.949 "disable_auto_failback": false, 00:24:38.949 "generate_uuids": false, 00:24:38.949 "transport_tos": 0, 00:24:38.949 "nvme_error_stat": false, 00:24:38.949 "rdma_srq_size": 0, 00:24:38.949 "io_path_stat": false, 00:24:38.949 "allow_accel_sequence": false, 00:24:38.949 "rdma_max_cq_size": 0, 00:24:38.949 "rdma_cm_event_timeout_ms": 0, 00:24:38.949 "dhchap_digests": [ 00:24:38.949 "sha256", 00:24:38.949 "sha384", 00:24:38.949 "sha512" 00:24:38.949 ], 00:24:38.949 "dhchap_dhgroups": [ 00:24:38.949 "null", 00:24:38.949 "ffdhe2048", 00:24:38.949 "ffdhe3072", 00:24:38.949 "ffdhe4096", 00:24:38.949 "ffdhe6144", 00:24:38.949 "ffdhe8192" 00:24:38.949 ] 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_nvme_attach_controller", 00:24:38.949 "params": { 00:24:38.949 "name": "nvme0", 00:24:38.949 "trtype": "TCP", 00:24:38.949 "adrfam": "IPv4", 00:24:38.949 "traddr": "127.0.0.1", 00:24:38.949 "trsvcid": "4420", 00:24:38.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:38.949 "prchk_reftag": false, 00:24:38.949 "prchk_guard": false, 00:24:38.949 "ctrlr_loss_timeout_sec": 0, 00:24:38.949 "reconnect_delay_sec": 0, 00:24:38.949 "fast_io_fail_timeout_sec": 0, 00:24:38.949 "psk": "key0", 00:24:38.949 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:38.949 "hdgst": false, 00:24:38.949 "ddgst": false 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_nvme_set_hotplug", 00:24:38.949 "params": { 00:24:38.949 "period_us": 100000, 00:24:38.949 "enable": false 00:24:38.949 } 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "method": "bdev_wait_for_examine" 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }, 00:24:38.949 { 00:24:38.949 "subsystem": "nbd", 00:24:38.949 "config": [] 00:24:38.949 } 00:24:38.949 ] 00:24:38.949 }' 00:24:38.949 23:25:01 keyring_file -- keyring/file.sh@114 -- # killprocess 85585 00:24:38.949 23:25:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85585 ']' 00:24:38.949 23:25:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85585 00:24:38.949 23:25:01 keyring_file -- common/autotest_common.sh@953 -- # uname 
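The JSON blob above is the output of the save_config RPC (keyring/file.sh@112). One way to pull out just the keyring portion when reading a run like this; the jq filter is illustrative, not part of the test:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config \
        | jq '.subsystems[] | select(.subsystem == "keyring").config'
    # Expected here: keyring_file_add_key entries for key0 (/tmp/tmp.olYFrA1O6E)
    # and key1 (/tmp/tmp.oU84RYq6lP).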
00:24:38.949 23:25:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:38.949 23:25:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85585 00:24:38.949 killing process with pid 85585 00:24:38.950 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.950 00:24:38.950 Latency(us) 00:24:38.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.950 =================================================================================================================== 00:24:38.950 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.950 23:25:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:38.950 23:25:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:38.950 23:25:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85585' 00:24:38.950 23:25:01 keyring_file -- common/autotest_common.sh@967 -- # kill 85585 00:24:38.950 23:25:01 keyring_file -- common/autotest_common.sh@972 -- # wait 85585 00:24:39.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:39.209 23:25:01 keyring_file -- keyring/file.sh@117 -- # bperfpid=85829 00:24:39.209 23:25:01 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85829 /var/tmp/bperf.sock 00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85829 ']' 00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
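The replacement bdevperf (pid 85829) is launched just below with -z, so it idles until driven over /var/tmp/bperf.sock, and waitforlisten blocks until that RPC socket answers. An illustrative version of that wait loop, assuming a simple retry count and interval rather than the exact autotest_common.sh helper:

#!/usr/bin/env bash
# Illustrative wait-for-RPC loop; the retry limit and sleep interval are
# assumptions, not the exact waitforlisten implementation.
RPC_SOCK=/var/tmp/bperf.sock
bperfpid=$1   # pid of the bdevperf instance started with -z

for _ in $(seq 1 100); do
    # Give up if the process died during startup.
    kill -0 "$bperfpid" 2> /dev/null || exit 1
    # rpc_get_methods succeeds once the app is listening on its socket.
    if scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done

Once the socket answers, the rest of the test drives the app through bperf_cmd, which is simply rpc.py pointed at /var/tmp/bperf.sock, as the keyring_get_keys calls that follow show.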
00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:39.209 23:25:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:39.209 23:25:01 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:39.209 23:25:01 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:39.209 "subsystems": [ 00:24:39.209 { 00:24:39.209 "subsystem": "keyring", 00:24:39.209 "config": [ 00:24:39.209 { 00:24:39.209 "method": "keyring_file_add_key", 00:24:39.209 "params": { 00:24:39.209 "name": "key0", 00:24:39.209 "path": "/tmp/tmp.olYFrA1O6E" 00:24:39.209 } 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "method": "keyring_file_add_key", 00:24:39.209 "params": { 00:24:39.209 "name": "key1", 00:24:39.209 "path": "/tmp/tmp.oU84RYq6lP" 00:24:39.209 } 00:24:39.209 } 00:24:39.209 ] 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "subsystem": "iobuf", 00:24:39.209 "config": [ 00:24:39.209 { 00:24:39.209 "method": "iobuf_set_options", 00:24:39.209 "params": { 00:24:39.209 "small_pool_count": 8192, 00:24:39.209 "large_pool_count": 1024, 00:24:39.209 "small_bufsize": 8192, 00:24:39.209 "large_bufsize": 135168 00:24:39.209 } 00:24:39.209 } 00:24:39.209 ] 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "subsystem": "sock", 00:24:39.209 "config": [ 00:24:39.209 { 00:24:39.209 "method": "sock_set_default_impl", 00:24:39.209 "params": { 00:24:39.209 "impl_name": "uring" 00:24:39.209 } 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "method": "sock_impl_set_options", 00:24:39.209 "params": { 00:24:39.209 "impl_name": "ssl", 00:24:39.209 "recv_buf_size": 4096, 00:24:39.209 "send_buf_size": 4096, 00:24:39.209 "enable_recv_pipe": true, 00:24:39.209 "enable_quickack": false, 00:24:39.209 "enable_placement_id": 0, 00:24:39.209 "enable_zerocopy_send_server": true, 00:24:39.209 "enable_zerocopy_send_client": false, 00:24:39.209 "zerocopy_threshold": 0, 00:24:39.209 "tls_version": 0, 00:24:39.209 "enable_ktls": false 00:24:39.209 } 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "method": "sock_impl_set_options", 00:24:39.209 "params": { 00:24:39.209 "impl_name": "posix", 00:24:39.209 "recv_buf_size": 2097152, 00:24:39.209 "send_buf_size": 2097152, 00:24:39.209 "enable_recv_pipe": true, 00:24:39.209 "enable_quickack": false, 00:24:39.209 "enable_placement_id": 0, 00:24:39.209 "enable_zerocopy_send_server": true, 00:24:39.209 "enable_zerocopy_send_client": false, 00:24:39.209 "zerocopy_threshold": 0, 00:24:39.209 "tls_version": 0, 00:24:39.209 "enable_ktls": false 00:24:39.209 } 00:24:39.209 }, 00:24:39.209 { 00:24:39.209 "method": "sock_impl_set_options", 00:24:39.209 "params": { 00:24:39.209 "impl_name": "uring", 00:24:39.209 "recv_buf_size": 2097152, 00:24:39.209 "send_buf_size": 2097152, 00:24:39.209 "enable_recv_pipe": true, 00:24:39.209 "enable_quickack": false, 00:24:39.210 "enable_placement_id": 0, 00:24:39.210 "enable_zerocopy_send_server": false, 00:24:39.210 "enable_zerocopy_send_client": false, 00:24:39.210 "zerocopy_threshold": 0, 00:24:39.210 "tls_version": 0, 00:24:39.210 "enable_ktls": false 00:24:39.210 } 00:24:39.210 } 00:24:39.210 ] 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "subsystem": "vmd", 00:24:39.210 "config": [] 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "subsystem": "accel", 00:24:39.210 "config": [ 00:24:39.210 { 00:24:39.210 "method": "accel_set_options", 00:24:39.210 "params": { 00:24:39.210 "small_cache_size": 128, 00:24:39.210 "large_cache_size": 16, 
00:24:39.210 "task_count": 2048, 00:24:39.210 "sequence_count": 2048, 00:24:39.210 "buf_count": 2048 00:24:39.210 } 00:24:39.210 } 00:24:39.210 ] 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "subsystem": "bdev", 00:24:39.210 "config": [ 00:24:39.210 { 00:24:39.210 "method": "bdev_set_options", 00:24:39.210 "params": { 00:24:39.210 "bdev_io_pool_size": 65535, 00:24:39.210 "bdev_io_cache_size": 256, 00:24:39.210 "bdev_auto_examine": true, 00:24:39.210 "iobuf_small_cache_size": 128, 00:24:39.210 "iobuf_large_cache_size": 16 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_raid_set_options", 00:24:39.210 "params": { 00:24:39.210 "process_window_size_kb": 1024 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_iscsi_set_options", 00:24:39.210 "params": { 00:24:39.210 "timeout_sec": 30 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_nvme_set_options", 00:24:39.210 "params": { 00:24:39.210 "action_on_timeout": "none", 00:24:39.210 "timeout_us": 0, 00:24:39.210 "timeout_admin_us": 0, 00:24:39.210 "keep_alive_timeout_ms": 10000, 00:24:39.210 "arbitration_burst": 0, 00:24:39.210 "low_priority_weight": 0, 00:24:39.210 "medium_priority_weight": 0, 00:24:39.210 "high_priority_weight": 0, 00:24:39.210 "nvme_adminq_poll_period_us": 10000, 00:24:39.210 "nvme_ioq_poll_period_us": 0, 00:24:39.210 "io_queue_requests": 512, 00:24:39.210 "delay_cmd_submit": true, 00:24:39.210 "transport_retry_count": 4, 00:24:39.210 "bdev_retry_count": 3, 00:24:39.210 "transport_ack_timeout": 0, 00:24:39.210 "ctrlr_loss_timeout_sec": 0, 00:24:39.210 "reconnect_delay_sec": 0, 00:24:39.210 "fast_io_fail_timeout_sec": 0, 00:24:39.210 "disable_auto_failback": false, 00:24:39.210 "generate_uuids": false, 00:24:39.210 "transport_tos": 0, 00:24:39.210 "nvme_error_stat": false, 00:24:39.210 "rdma_srq_size": 0, 00:24:39.210 "io_path_stat": false, 00:24:39.210 "allow_accel_sequence": false, 00:24:39.210 "rdma_max_cq_size": 0, 00:24:39.210 "rdma_cm_event_timeout_ms": 0, 00:24:39.210 "dhchap_digests": [ 00:24:39.210 "sha256", 00:24:39.210 "sha384", 00:24:39.210 "sha512" 00:24:39.210 ], 00:24:39.210 "dhchap_dhgroups": [ 00:24:39.210 "null", 00:24:39.210 "ffdhe2048", 00:24:39.210 "ffdhe3072", 00:24:39.210 "ffdhe4096", 00:24:39.210 "ffdhe6144", 00:24:39.210 "ffdhe8192" 00:24:39.210 ] 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_nvme_attach_controller", 00:24:39.210 "params": { 00:24:39.210 "name": "nvme0", 00:24:39.210 "trtype": "TCP", 00:24:39.210 "adrfam": "IPv4", 00:24:39.210 "traddr": "127.0.0.1", 00:24:39.210 "trsvcid": "4420", 00:24:39.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.210 "prchk_reftag": false, 00:24:39.210 "prchk_guard": false, 00:24:39.210 "ctrlr_loss_timeout_sec": 0, 00:24:39.210 "reconnect_delay_sec": 0, 00:24:39.210 "fast_io_fail_timeout_sec": 0, 00:24:39.210 "psk": "key0", 00:24:39.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:39.210 "hdgst": false, 00:24:39.210 "ddgst": false 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_nvme_set_hotplug", 00:24:39.210 "params": { 00:24:39.210 "period_us": 100000, 00:24:39.210 "enable": false 00:24:39.210 } 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "method": "bdev_wait_for_examine" 00:24:39.210 } 00:24:39.210 ] 00:24:39.210 }, 00:24:39.210 { 00:24:39.210 "subsystem": "nbd", 00:24:39.210 "config": [] 00:24:39.210 } 00:24:39.210 ] 00:24:39.210 }' 00:24:39.210 [2024-07-24 23:25:01.533037] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 
24.03.0 initialization... 00:24:39.210 [2024-07-24 23:25:01.533385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85829 ] 00:24:39.210 [2024-07-24 23:25:01.668819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.469 [2024-07-24 23:25:01.802878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.727 [2024-07-24 23:25:01.959085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:39.727 [2024-07-24 23:25:02.024721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.986 23:25:02 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.986 23:25:02 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:24:39.986 23:25:02 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:39.986 23:25:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:39.986 23:25:02 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:40.244 23:25:02 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:40.244 23:25:02 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:40.244 23:25:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:40.244 23:25:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:40.244 23:25:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:40.244 23:25:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:40.244 23:25:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:40.502 23:25:02 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:40.502 23:25:02 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:40.502 23:25:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:40.502 23:25:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:40.502 23:25:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:40.502 23:25:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:40.502 23:25:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:40.761 23:25:03 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:40.761 23:25:03 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:40.761 23:25:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:40.761 23:25:03 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:41.022 23:25:03 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:41.022 23:25:03 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:41.022 23:25:03 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.olYFrA1O6E /tmp/tmp.oU84RYq6lP 00:24:41.022 23:25:03 keyring_file -- keyring/file.sh@20 -- # killprocess 85829 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85829 ']' 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85829 00:24:41.022 23:25:03 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85829 00:24:41.022 killing process with pid 85829 00:24:41.022 Received shutdown signal, test time was about 1.000000 seconds 00:24:41.022 00:24:41.022 Latency(us) 00:24:41.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.022 =================================================================================================================== 00:24:41.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85829' 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@967 -- # kill 85829 00:24:41.022 23:25:03 keyring_file -- common/autotest_common.sh@972 -- # wait 85829 00:24:41.284 23:25:03 keyring_file -- keyring/file.sh@21 -- # killprocess 85568 00:24:41.284 23:25:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85568 ']' 00:24:41.284 23:25:03 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85568 00:24:41.284 23:25:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:24:41.284 23:25:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.284 23:25:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85568 00:24:41.543 killing process with pid 85568 00:24:41.543 23:25:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.543 23:25:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.543 23:25:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85568' 00:24:41.543 23:25:03 keyring_file -- common/autotest_common.sh@967 -- # kill 85568 00:24:41.543 [2024-07-24 23:25:03.789835] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:41.543 23:25:03 keyring_file -- common/autotest_common.sh@972 -- # wait 85568 00:24:42.111 ************************************ 00:24:42.111 END TEST keyring_file 00:24:42.111 ************************************ 00:24:42.111 00:24:42.111 real 0m15.557s 00:24:42.111 user 0m37.916s 00:24:42.111 sys 0m3.269s 00:24:42.111 23:25:04 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:42.112 23:25:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:42.112 23:25:04 -- common/autotest_common.sh@1142 -- # return 0 00:24:42.112 23:25:04 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:24:42.112 23:25:04 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:42.112 23:25:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:42.112 23:25:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.112 23:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:42.112 ************************************ 00:24:42.112 START TEST keyring_linux 00:24:42.112 ************************************ 00:24:42.112 23:25:04 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:42.112 * 
Looking for test storage... 00:24:42.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e26f5e1a-ae07-4101-a640-4712c9abba53 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e26f5e1a-ae07-4101-a640-4712c9abba53 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:42.112 23:25:04 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.112 23:25:04 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.112 23:25:04 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.112 23:25:04 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.112 23:25:04 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.112 23:25:04 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.112 23:25:04 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:42.112 23:25:04 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:42.112 23:25:04 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:42.112 /tmp/:spdk-test:key0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:42.112 23:25:04 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:42.112 23:25:04 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:24:42.112 23:25:04 keyring_linux -- nvmf/common.sh@705 -- # python - 00:24:42.371 23:25:04 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:42.371 /tmp/:spdk-test:key1 00:24:42.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.371 23:25:04 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:42.371 23:25:04 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85947 00:24:42.371 23:25:04 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:42.371 23:25:04 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85947 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85947 ']' 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.371 23:25:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:42.371 [2024-07-24 23:25:04.695467] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:24:42.371 [2024-07-24 23:25:04.695803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85947 ] 00:24:42.371 [2024-07-24 23:25:04.836401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.630 [2024-07-24 23:25:04.965203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.630 [2024-07-24 23:25:05.042444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:43.197 23:25:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.197 23:25:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:43.197 23:25:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:43.197 23:25:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.197 23:25:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:43.197 [2024-07-24 23:25:05.669924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.456 null0 00:24:43.456 [2024-07-24 23:25:05.701890] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.456 [2024-07-24 23:25:05.702338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.456 23:25:05 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:43.456 370379286 00:24:43.456 23:25:05 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:43.456 432214559 00:24:43.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:43.456 23:25:05 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85965 00:24:43.456 23:25:05 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:43.456 23:25:05 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85965 /var/tmp/bperf.sock 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85965 ']' 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.456 23:25:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:43.456 [2024-07-24 23:25:05.787291] Starting SPDK v24.09-pre git sha1 3c25cfe1d / DPDK 24.03.0 initialization... 
00:24:43.456 [2024-07-24 23:25:05.787593] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85965 ] 00:24:43.456 [2024-07-24 23:25:05.928816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.715 [2024-07-24 23:25:06.059695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.282 23:25:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.282 23:25:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:24:44.282 23:25:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:44.282 23:25:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:44.541 23:25:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:44.541 23:25:06 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:44.800 [2024-07-24 23:25:07.246007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:24:45.058 23:25:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:45.058 23:25:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:45.058 [2024-07-24 23:25:07.514024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.317 nvme0n1 00:24:45.317 23:25:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:45.317 23:25:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:45.317 23:25:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:45.317 23:25:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:45.317 23:25:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:45.317 23:25:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:45.576 23:25:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:45.576 23:25:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:45.576 23:25:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:45.576 23:25:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:45.576 23:25:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:45.576 23:25:07 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:45.576 23:25:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@25 -- # sn=370379286 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:45.835 
23:25:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 370379286 == \3\7\0\3\7\9\2\8\6 ]] 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 370379286 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:45.835 23:25:08 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:45.835 Running I/O for 1 seconds... 00:24:46.771 00:24:46.771 Latency(us) 00:24:46.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.771 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:46.771 nvme0n1 : 1.01 12364.98 48.30 0.00 0.00 10290.58 4944.99 14358.34 00:24:46.771 =================================================================================================================== 00:24:46.771 Total : 12364.98 48.30 0.00 0.00 10290.58 4944.99 14358.34 00:24:46.771 0 00:24:47.030 23:25:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:47.030 23:25:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:47.288 23:25:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:47.288 23:25:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:47.288 23:25:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:47.288 23:25:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:47.288 23:25:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:47.288 23:25:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.547 23:25:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:47.547 23:25:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:47.547 23:25:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:47.547 23:25:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:47.547 23:25:09 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:47.547 23:25:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:47.806 [2024-07-24 23:25:10.066292] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:47.806 [2024-07-24 23:25:10.066752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1465460 (107): Transport endpoint is not connected 00:24:47.806 [2024-07-24 23:25:10.067743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1465460 (9): Bad file descriptor 00:24:47.806 [2024-07-24 23:25:10.068739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:47.806 [2024-07-24 23:25:10.068766] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:47.806 [2024-07-24 23:25:10.068778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:47.806 request: 00:24:47.806 { 00:24:47.806 "name": "nvme0", 00:24:47.806 "trtype": "tcp", 00:24:47.806 "traddr": "127.0.0.1", 00:24:47.806 "adrfam": "ipv4", 00:24:47.806 "trsvcid": "4420", 00:24:47.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:47.806 "prchk_reftag": false, 00:24:47.806 "prchk_guard": false, 00:24:47.806 "hdgst": false, 00:24:47.806 "ddgst": false, 00:24:47.806 "psk": ":spdk-test:key1", 00:24:47.806 "method": "bdev_nvme_attach_controller", 00:24:47.806 "req_id": 1 00:24:47.806 } 00:24:47.806 Got JSON-RPC error response 00:24:47.806 response: 00:24:47.806 { 00:24:47.806 "code": -5, 00:24:47.806 "message": "Input/output error" 00:24:47.806 } 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@33 -- # sn=370379286 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 370379286 00:24:47.806 1 links removed 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@33 -- # sn=432214559 00:24:47.806 23:25:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 432214559 00:24:47.806 1 links removed 00:24:47.806 23:25:10 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85965 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85965 ']' 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85965 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85965 00:24:47.806 killing process with pid 85965 00:24:47.806 Received shutdown signal, test time was about 1.000000 seconds 00:24:47.806 00:24:47.806 Latency(us) 00:24:47.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.806 =================================================================================================================== 00:24:47.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85965' 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 85965 00:24:47.806 23:25:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 85965 00:24:48.064 23:25:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85947 00:24:48.064 23:25:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85947 ']' 00:24:48.064 23:25:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85947 00:24:48.064 23:25:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85947 00:24:48.065 killing process with pid 85947 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85947' 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 85947 00:24:48.065 23:25:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 85947 00:24:48.632 ************************************ 00:24:48.632 END TEST keyring_linux 00:24:48.632 ************************************ 00:24:48.632 00:24:48.632 real 0m6.585s 00:24:48.632 user 0m12.397s 00:24:48.632 sys 0m1.761s 00:24:48.632 23:25:11 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.632 23:25:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:48.632 23:25:11 -- common/autotest_common.sh@1142 -- # return 0 00:24:48.632 23:25:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
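The keyring_linux suite that just finished stores its PSKs in the kernel session keyring rather than in files: the interchange-format strings produced earlier by format_interchange_psk are added under the names :spdk-test:key0 and :spdk-test:key1, resolved by serial number with keyctl search, and unlinked again during cleanup (the two "1 links removed" lines above). A condensed sketch of that flow using key0's values from this run; the helper structure is simplified relative to keyring/linux.sh:

#!/usr/bin/env bash
# Condensed session-keyring flow; the key name and payload are the values
# used by this test run, and the helper structure is simplified.
NAME=":spdk-test:key0"
PSK="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

# Store the interchange-format PSK in the session keyring; keyctl prints the
# key's serial number (370379286 in the run above).
sn=$(keyctl add user "$NAME" "$PSK" @s)

# SPDK resolves the same key by name through --psk :spdk-test:key0; the test
# cross-checks the serial and the payload.
keyctl search @s user "$NAME"
keyctl print "$sn"

# Cleanup: drop the link from the session keyring ("1 links removed").
keyctl unlink "$sn" @s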
00:24:48.632 23:25:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:24:48.632 23:25:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:24:48.632 23:25:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:24:48.632 23:25:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:24:48.632 23:25:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:24:48.632 23:25:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:24:48.632 23:25:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:24:48.632 23:25:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:48.632 23:25:11 -- common/autotest_common.sh@10 -- # set +x 00:24:48.632 23:25:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:24:48.632 23:25:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:24:48.632 23:25:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:24:48.632 23:25:11 -- common/autotest_common.sh@10 -- # set +x 00:24:50.536 INFO: APP EXITING 00:24:50.536 INFO: killing all VMs 00:24:50.536 INFO: killing vhost app 00:24:50.536 INFO: EXIT DONE 00:24:50.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.071 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:51.071 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:51.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.637 Cleaning 00:24:51.637 Removing: /var/run/dpdk/spdk0/config 00:24:51.637 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:51.637 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:51.637 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:51.637 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:51.637 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:51.637 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:51.637 Removing: /var/run/dpdk/spdk1/config 00:24:51.637 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:51.637 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:51.637 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:51.637 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:51.637 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:51.637 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:51.638 Removing: /var/run/dpdk/spdk2/config 00:24:51.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:51.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:51.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:51.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:51.638 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:51.638 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:51.638 Removing: /var/run/dpdk/spdk3/config 00:24:51.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:51.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:51.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:51.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:51.638 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:51.638 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:51.638 Removing: /var/run/dpdk/spdk4/config 00:24:51.896 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:51.896 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:51.896 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:51.896 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:51.896 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:51.896 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:51.896 Removing: /dev/shm/nvmf_trace.0 00:24:51.896 Removing: /dev/shm/spdk_tgt_trace.pid58900 00:24:51.896 Removing: /var/run/dpdk/spdk0 00:24:51.896 Removing: /var/run/dpdk/spdk1 00:24:51.896 Removing: /var/run/dpdk/spdk2 00:24:51.896 Removing: /var/run/dpdk/spdk3 00:24:51.896 Removing: /var/run/dpdk/spdk4 00:24:51.896 Removing: /var/run/dpdk/spdk_pid58755 00:24:51.896 Removing: /var/run/dpdk/spdk_pid58900 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59098 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59179 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59212 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59316 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59334 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59452 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59653 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59799 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59858 00:24:51.896 Removing: /var/run/dpdk/spdk_pid59934 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60025 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60102 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60135 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60171 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60232 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60332 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60763 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60809 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60860 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60881 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60954 00:24:51.896 Removing: /var/run/dpdk/spdk_pid60970 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61037 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61053 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61104 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61122 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61162 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61180 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61307 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61338 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61413 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61470 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61494 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61557 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61593 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61622 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61662 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61691 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61731 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61760 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61800 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61829 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61869 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61898 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61940 00:24:51.896 Removing: /var/run/dpdk/spdk_pid61975 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62009 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62045 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62079 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62114 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62153 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62191 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62225 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62261 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62335 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62424 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62732 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62744 00:24:51.896 
Removing: /var/run/dpdk/spdk_pid62786 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62794 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62815 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62834 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62853 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62874 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62893 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62912 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62922 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62952 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62960 00:24:51.896 Removing: /var/run/dpdk/spdk_pid62981 00:24:51.896 Removing: /var/run/dpdk/spdk_pid63000 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63019 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63035 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63059 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63073 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63088 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63124 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63138 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63173 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63231 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63265 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63275 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63303 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63317 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63326 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63368 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63384 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63418 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63426 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63437 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63446 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63456 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63471 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63476 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63490 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63524 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63545 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63560 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63594 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63598 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63611 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63652 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63663 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63695 00:24:52.154 Removing: /var/run/dpdk/spdk_pid63697 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63710 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63723 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63725 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63738 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63746 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63753 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63827 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63880 00:24:52.155 Removing: /var/run/dpdk/spdk_pid63990 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64024 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64069 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64089 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64111 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64126 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64159 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64180 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64250 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64266 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64310 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64392 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64453 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64482 00:24:52.155 Removing: 
/var/run/dpdk/spdk_pid64574 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64616 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64653 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64873 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64965 00:24:52.155 Removing: /var/run/dpdk/spdk_pid64999 00:24:52.155 Removing: /var/run/dpdk/spdk_pid65315 00:24:52.155 Removing: /var/run/dpdk/spdk_pid65353 00:24:52.155 Removing: /var/run/dpdk/spdk_pid65645 00:24:52.155 Removing: /var/run/dpdk/spdk_pid66054 00:24:52.155 Removing: /var/run/dpdk/spdk_pid66337 00:24:52.155 Removing: /var/run/dpdk/spdk_pid67129 00:24:52.155 Removing: /var/run/dpdk/spdk_pid67951 00:24:52.155 Removing: /var/run/dpdk/spdk_pid68066 00:24:52.155 Removing: /var/run/dpdk/spdk_pid68135 00:24:52.155 Removing: /var/run/dpdk/spdk_pid69407 00:24:52.155 Removing: /var/run/dpdk/spdk_pid69622 00:24:52.155 Removing: /var/run/dpdk/spdk_pid72986 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73302 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73411 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73539 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73567 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73594 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73622 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73714 00:24:52.155 Removing: /var/run/dpdk/spdk_pid73850 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74000 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74075 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74268 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74357 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74451 00:24:52.155 Removing: /var/run/dpdk/spdk_pid74754 00:24:52.155 Removing: /var/run/dpdk/spdk_pid75135 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75137 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75412 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75426 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75440 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75475 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75481 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75784 00:24:52.413 Removing: /var/run/dpdk/spdk_pid75827 00:24:52.413 Removing: /var/run/dpdk/spdk_pid76110 00:24:52.413 Removing: /var/run/dpdk/spdk_pid76316 00:24:52.413 Removing: /var/run/dpdk/spdk_pid76701 00:24:52.413 Removing: /var/run/dpdk/spdk_pid77210 00:24:52.413 Removing: /var/run/dpdk/spdk_pid78033 00:24:52.413 Removing: /var/run/dpdk/spdk_pid78619 00:24:52.413 Removing: /var/run/dpdk/spdk_pid78622 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80512 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80572 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80634 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80694 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80815 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80870 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80936 00:24:52.413 Removing: /var/run/dpdk/spdk_pid80995 00:24:52.413 Removing: /var/run/dpdk/spdk_pid81310 00:24:52.413 Removing: /var/run/dpdk/spdk_pid82478 00:24:52.413 Removing: /var/run/dpdk/spdk_pid82618 00:24:52.413 Removing: /var/run/dpdk/spdk_pid82861 00:24:52.413 Removing: /var/run/dpdk/spdk_pid83414 00:24:52.413 Removing: /var/run/dpdk/spdk_pid83573 00:24:52.413 Removing: /var/run/dpdk/spdk_pid83730 00:24:52.413 Removing: /var/run/dpdk/spdk_pid83826 00:24:52.413 Removing: /var/run/dpdk/spdk_pid83986 00:24:52.413 Removing: /var/run/dpdk/spdk_pid84095 00:24:52.413 Removing: /var/run/dpdk/spdk_pid84747 00:24:52.413 Removing: /var/run/dpdk/spdk_pid84784 00:24:52.413 Removing: /var/run/dpdk/spdk_pid84824 00:24:52.413 Removing: /var/run/dpdk/spdk_pid85077 
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85108
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85143
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85568
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85585
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85829
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85947
00:24:52.413 Removing: /var/run/dpdk/spdk_pid85965
00:24:52.413 Clean
00:24:52.413 23:25:14 -- common/autotest_common.sh@1451 -- # return 0
00:24:52.413 23:25:14 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:24:52.413 23:25:14 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:52.413 23:25:14 -- common/autotest_common.sh@10 -- # set +x
00:24:52.413 23:25:14 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:24:52.413 23:25:14 -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:52.413 23:25:14 -- common/autotest_common.sh@10 -- # set +x
00:24:52.671 23:25:14 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:24:52.671 23:25:14 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:24:52.671 23:25:14 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:24:52.671 23:25:14 -- spdk/autotest.sh@391 -- # hash lcov
00:24:52.671 23:25:14 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:24:52.671 23:25:14 -- spdk/autotest.sh@393 -- # hostname
00:24:52.671 23:25:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:24:52.672 geninfo: WARNING: invalid characters removed from testname!
00:25:19.207 23:25:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:19.207 23:25:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:22.488 23:25:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:25.017 23:25:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:27.550 23:25:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:30.086 23:25:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:32.618 23:25:54 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:32.618 23:25:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:32.618 23:25:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:32.618 23:25:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:32.618 23:25:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:32.618 23:25:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:32.618 23:25:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:32.618 23:25:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:32.618 23:25:54 -- paths/export.sh@5 -- $ export PATH
00:25:32.618 23:25:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:32.618 23:25:54 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:25:32.618 23:25:54 -- common/autobuild_common.sh@444 -- $ date +%s
00:25:32.618 23:25:54 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721863554.XXXXXX
00:25:32.618 23:25:54 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721863554.BSwETC
00:25:32.618 23:25:54 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:25:32.618 23:25:54 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:25:32.618 23:25:54 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:25:32.618 23:25:54 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:25:32.618 23:25:54 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:25:32.618 23:25:54 -- common/autobuild_common.sh@460 -- $ get_config_params
00:25:32.618 23:25:54 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:25:32.618 23:25:54 -- common/autotest_common.sh@10 -- $ set +x
00:25:32.618 23:25:54 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
00:25:32.618 23:25:54 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:25:32.618 23:25:54 -- pm/common@17 -- $ local monitor
00:25:32.618 23:25:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:32.618 23:25:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:32.618 23:25:54 -- pm/common@25 -- $ sleep 1
00:25:32.618 23:25:54 -- pm/common@21 -- $ date +%s
00:25:32.618 23:25:54 -- pm/common@21 -- $ date +%s
00:25:32.618 23:25:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721863554
00:25:32.618 23:25:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721863554
00:25:32.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721863554_collect-vmstat.pm.log
00:25:32.618 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721863554_collect-cpu-load.pm.log
00:25:33.554 23:25:55 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:25:33.554 23:25:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:25:33.554 23:25:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:25:33.554 23:25:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:25:33.554 23:25:55 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:25:33.554 23:25:55 -- spdk/autopackage.sh@19 -- $ timing_finish
00:25:33.554 23:25:55 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:33.554 23:25:55 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:25:33.554 23:25:55 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:33.554 23:25:55 -- spdk/autopackage.sh@20 -- $ exit 0
00:25:33.554 23:25:55 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:25:33.554 23:25:55 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:33.554 23:25:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:33.554 23:25:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:33.554 23:25:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:25:33.554 23:25:55 -- pm/common@44 -- $ pid=87685
00:25:33.554 23:25:55 -- pm/common@50 -- $ kill -TERM 87685
00:25:33.554 23:25:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:33.554 23:25:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:25:33.554 23:25:55 -- pm/common@44 -- $ pid=87686
00:25:33.554 23:25:55 -- pm/common@50 -- $ kill -TERM 87686
00:25:33.563 + [[ -n 5268 ]]
00:25:33.563 + sudo kill 5268
00:25:33.571 [Pipeline] }
00:25:33.583 [Pipeline] // timeout
00:25:33.589 [Pipeline] }
00:25:33.607 [Pipeline] // stage
00:25:33.613 [Pipeline] }
00:25:33.632 [Pipeline] // catchError
00:25:33.642 [Pipeline] stage
00:25:33.645 [Pipeline] { (Stop VM)
00:25:33.661 [Pipeline] sh
00:25:33.942 + vagrant halt
00:25:37.228 ==> default: Halting domain...
00:25:43.824 [Pipeline] sh
00:25:44.104 + vagrant destroy -f
00:25:47.391 ==> default: Removing domain...
00:25:47.403 [Pipeline] sh
00:25:47.683 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:25:47.692 [Pipeline] }
00:25:47.710 [Pipeline] // stage
00:25:47.714 [Pipeline] }
00:25:47.738 [Pipeline] // dir
00:25:47.744 [Pipeline] }
00:25:47.756 [Pipeline] // wrap
00:25:47.762 [Pipeline] }
00:25:47.773 [Pipeline] // catchError
00:25:47.779 [Pipeline] stage
00:25:47.780 [Pipeline] { (Epilogue)
00:25:47.792 [Pipeline] sh
00:25:48.070 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:54.647 [Pipeline] catchError
00:25:54.649 [Pipeline] {
00:25:54.663 [Pipeline] sh
00:25:54.943 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:54.943 Artifacts sizes are good
00:25:54.951 [Pipeline] }
00:25:54.968 [Pipeline] // catchError
00:25:54.980 [Pipeline] archiveArtifacts
00:25:54.987 Archiving artifacts
00:25:55.206 [Pipeline] cleanWs
00:25:55.217 [WS-CLEANUP] Deleting project workspace...
00:25:55.218 [WS-CLEANUP] Deferred wipeout is used...
00:25:55.224 [WS-CLEANUP] done
00:25:55.226 [Pipeline] }
00:25:55.244 [Pipeline] // stage
00:25:55.250 [Pipeline] }
00:25:55.266 [Pipeline] // node
00:25:55.272 [Pipeline] End of Pipeline
00:25:55.305 Finished: SUCCESS